I was looking for a convenient solution for simple self-hosted commenting, but instead was greeted with a mix of personal playgrounds and vibe-coding stands that pass for “full-stack development” nowadays. I spent several days playing with these three commenting systems, studying and modifying their code, so this is not going to be a set of one-paragraph LLM-generated reviews. However, I’m still not deeply familiar with their codebases, so you might call it a “superficial code review”. In order of encounter, here it goes…

Remark42

I’m pretty sure many of you have already read Dan Abramov’s article “You Might Not Need Redux”, which I usually prefer to retitle “drop Redux and never touch it again… No, you don’t need it”. Well, whoever did the frontend implementation obviously missed that article. Tons of Redux boilerplate, reusable components that are never reused, tests for every little property, etc. I will not quote all of it, there is just too much of

[4.0K]  ./frontend/apps/remark42/app/components/profile/
├── [4.0K]  components
│   └── [4.0K]  counter
│       ├── [ 261]  counter.module.css
│       ├── [ 320]  counter.spec.tsx
│       ├── [ 242]  counter.tsx
│       └── [  37]  index.ts
├── [  27]  index.ts
├── [3.1K]  profile.module.css
├── [8.7K]  profile.spec.tsx
└── [8.2K]  profile.tsx

I mean compared to the general overengineering/overmodularization even this ASCII-drawing-like JSX looks fine:

    {formFooterJSX}
  </>
) : (
  <>
    {hasOAuthProviders && (

The official Remark42 frontend was started circa 2019, when IE 11 was still an issue, so a React-like framework was fine. But even for React you don’t usually deploy via
https://github.com/umputun/remark42/blob/bc612ddf8192057e587f3880a64a70bf513ac628/docker-init.sh#L4

find . -regex '.*\.\(html\|js\|mjs\)$' -print -exec sed -i "s|{% REMARK_URL %}|${REMARK_URL}|g" {} \;
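A common alternative is to inject the base URL at request time instead of sed-rewriting every built asset at container start. A minimal sketch of the idea (my illustration, not Remark42’s code; `REMARK_URL` is the env var from the script above):

```go
package main

import "fmt"

// configJS renders a one-line script exposing the runtime base URL,
// so the static bundle can read window.REMARK_URL instead of having
// a placeholder sed-substituted into every built .js file.
func configJS(baseURL string) string {
	return fmt.Sprintf("window.REMARK_URL = %q;\n", baseURL)
}

func main() {
	// In a real server you'd serve this under /config.js, e.g.:
	//   http.HandleFunc("/config.js", func(w http.ResponseWriter, r *http.Request) {
	//       fmt.Fprint(w, configJS(os.Getenv("REMARK_URL")))
	//   })
	fmt.Print(configJS("https://example.com/comments"))
}
```

The built assets then stay byte-identical across deployments, and only the tiny config script varies.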

“If the front end is poorly implemented, should I trust the back end?” — I thought. I came down harsh on it at first: “why do I need the latest Go compiler to develop a commenting service?” and “why does a simple commenting service require 140 dependencies?”. Gosh, I was so naive. Over time I came to the conclusion that the back end is rock solid. And the storage engine, BoltDB? Man, I thought employing a hierarchical DB to store hierarchical data went out of style, but there it is: a DB engine actually picked for the data model, not a data model stuffed into unsuitable storage.
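To see why a hierarchical engine fits threaded comments, here is a toy illustration, with plain Go maps standing in for BoltDB’s nested buckets (my sketch, not Remark42’s actual schema):

```go
package main

import (
	"fmt"
	"sort"
)

// Comment is a minimal stand-in for a stored comment.
type Comment struct {
	ID       string
	ParentID string // "" for top-level
	Text     string
}

// Thread groups comments by parent, mirroring how nested buckets
// (site -> post -> comment) keep hierarchical data hierarchical
// instead of flattening it into relational rows.
func Thread(comments []Comment) map[string][]Comment {
	byParent := map[string][]Comment{}
	for _, c := range comments {
		byParent[c.ParentID] = append(byParent[c.ParentID], c)
	}
	return byParent
}

// Render walks the tree depth-first, indenting replies.
func Render(byParent map[string][]Comment, parent, indent string, out *[]string) {
	children := byParent[parent]
	sort.Slice(children, func(i, j int) bool { return children[i].ID < children[j].ID })
	for _, c := range children {
		*out = append(*out, indent+c.Text)
		Render(byParent, c.ID, indent+"  ", out)
	}
}

func main() {
	comments := []Comment{
		{ID: "1", Text: "first"},
		{ID: "2", ParentID: "1", Text: "reply"},
		{ID: "3", Text: "second"},
	}
	var out []string
	Render(Thread(comments), "", "", &out)
	for _, line := range out {
		fmt.Println(line)
	}
}
```

The tree traversal maps one-to-one onto cursor walks over nested buckets; no joins, no foreign keys, no ORM in between.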

I almost wanted to argue about the Redis cache usage until I realized there is no Redis cache in Remark42, only cache invalidation via Redis pub/sub (still not sure why it’s needed here):
https://github.com/umputun/remark42/pull/736
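The mechanism itself is simple: one instance publishes a key, and every subscribed instance evicts that key from its local cache. An in-process sketch with a Go channel standing in for the Redis pub/sub channel (my illustration, not Remark42’s code):

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a local cache whose entries can be invalidated
// by messages arriving on a shared bus.
type Cache struct {
	mu   sync.Mutex
	data map[string]string
}

func NewCache() *Cache { return &Cache{data: map[string]string{}} }

func (c *Cache) Set(k, v string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[k] = v
}

func (c *Cache) Get(k string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.data[k]
	return v, ok
}

// Subscribe evicts keys published on bus, the way a Redis
// SUBSCRIBE loop would evict entries on each instance.
func (c *Cache) Subscribe(bus <-chan string, done chan<- struct{}) {
	go func() {
		for k := range bus {
			c.mu.Lock()
			delete(c.data, k)
			c.mu.Unlock()
		}
		close(done)
	}()
}

func main() {
	cache := NewCache()
	cache.Set("post-1", "rendered comments")

	bus := make(chan string)
	done := make(chan struct{})
	cache.Subscribe(bus, done)

	bus <- "post-1" // another instance edited post-1
	close(bus)
	<-done

	_, ok := cache.Get("post-1")
	fmt.Println("still cached:", ok)
}
```

This only pays off when several instances share one backing store, which is presumably the multi-instance scenario the PR targets.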

So where did the stark contrast between front and back end come from? Well, it appears the backend was implemented a very long time ago, while the frontend is a more recent implementation, done by different people. Arguably, a re-implementation of the frontend on top of the existing backend could be an absolute hit. Now let’s go to the next item.

Artalk

This GitHub project has so many stars you can tell I’m jealous. My whole life I’ve been horrible at adapting to popular opinions. And I was also tricked into thinking a new champion was in the house, because of how this project follows the ideology of:
— How many features do you want to have?
— Yes!
It supports all the databases, all the caches, comment moderation, captchas and other anti-spam measures, visitor stats, and God knows what else. Any LLM will tell you that Artalk has (any) feature you want — which is exactly why I went for Artalk as my second option.

Remember I was whining about Remark42’s dependencies? It’s 430 Go deps for Artalk, and some are as large as aws-sdk-go. Just by dropping Redis, memcached, and Swagger I managed to reduce the binary size 1.5-fold, from 62 to 43 MB. Yes, the original binary actually contains self-hosted API docs whether you want them or not (you cannot disable them) — that’s how deep the “having all the features” goes.

I think the recent Shai-Hulud worm really reminded people that you should not collect crap from all over GitHub in your project. Hopefully some people will wake up.

So that was the surface. And here go some code samples.
https://github.com/ArtalkJS/Artalk/blob/20be96f6f874c1344c2f1a71a698c68d09e9fff8/ui/artalk/src/artalk.ts#L36-L43

;(() => {
  // Init event manager
  EventsService(ctx)

  // Init config service
  ConfigService(ctx)
})()

So it’s an IIFE. The problem is that this IIFE does nothing; it serves no purpose. It seems to be either AI or human slop. The next example is harder:
https://github.com/ArtalkJS/Artalk/blob/71a651153bc68267a8b86c4ee88c597bd4097071/ui/artalk/src/editor/editor.ts#L8

export interface EditorOptions {
  getEvents: () => EventManager
  getConf: () => ConfigManager
}

export class Editor implements IEditor {
  constructor(opts: EditorOptions) {
    this.opts = opts
  }
  getPlugins() {
    return this.plugins
  }

  setPlugins(plugins: PluginManager) {
    this.plugins = plugins
  }

  setContent(val: string) {
    this.ui.$textarea.value = val

    // plug hook: content updated
    this.getPlugins()?.getEvents().trigger('content-updated', val)
  }

  submit() {
    const next = () => {
      this.getPlugins()?.getEvents().trigger('editor-submit')
      this.opts.getEvents().trigger('editor-submit')
    }
    // …
  }
}

I will not quote the whole plugin and service-locator code, but there is duplication in the architecture: first a dependency-injected service locator queried via this.opts.getEvents(), and second an ad-hoc this.getPlugins() consuming the this.setPlugins() value, in violation of the aforementioned dependency injection. Sometimes one mechanism is employed, sometimes both. I will not quote the horrible implementation of the injection service to explain why it failed to provide true factory injection, but fail it did, so the ad-hoc slop was put on top of it. Which is, once again, a symptom of LLM-generated code.
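For contrast, with a single injection path the dependency is handed over once, in the constructor, and there is no second setter-based channel to drift out of sync with it. A short illustration in Go (my sketch, not Artalk’s code; all names are made up):

```go
package main

import "fmt"

// EventManager is a minimal event bus.
type EventManager struct {
	handlers map[string][]func(string)
}

func NewEventManager() *EventManager {
	return &EventManager{handlers: map[string][]func(string){}}
}

func (m *EventManager) On(name string, h func(string)) {
	m.handlers[name] = append(m.handlers[name], h)
}

func (m *EventManager) Trigger(name, payload string) {
	for _, h := range m.handlers[name] {
		h(payload)
	}
}

// Editor receives its one EventManager at construction time; there is
// no setter that could later introduce a second, diverging source of
// the same dependency.
type Editor struct {
	events *EventManager
}

func NewEditor(events *EventManager) *Editor {
	return &Editor{events: events}
}

func (e *Editor) SetContent(val string) {
	e.events.Trigger("content-updated", val)
}

func main() {
	events := NewEventManager()
	events.On("content-updated", func(v string) {
		fmt.Println("updated:", v)
	})
	NewEditor(events).SetContent("hello")
}
```

With one path there is nothing to "sometimes" use: every consumer reaches the same instance the same way.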

The death blow comes at the server side.
https://github.com/ArtalkJS/Artalk/blob/71a651153bc68267a8b86c4ee88c597bd4097071/internal/dao/migrate.go#L23-L27

// Delete all foreign key constraints
// Leave relationship maintenance to the program and reduce the difficulty of database management.
// because there are many different DBs and the implementation of foreign keys may be different,
// and the DB may not support foreign keys, so don't rely on the foreign key function of the DB system.
dao.DropConstraintsIfExist()

So, DB foreign keys? They create too many errors, let’s just drop them. Oh, now we have data inconsistencies, what should we do next?

func (dao *Dao) MergePages() {
	// merge pages with same key and site_name, sum pv
	pages := []*entity.Page{}

	// load all pages
	if err := dao.DB().Order("id ASC").Find(&pages).Error; err != nil {
		log.Fatal("Failed to load pages. ", err.Error)
	}
	beforeLen := len(pages)

	// merge pages
	mergedPages := map[string]*entity.Page{}
	for _, page := range pages {
		key := page.SiteName + page.Key
		if _, ok := mergedPages[key]; !ok {
			mergedPages[key] = page
		} else {
			mergedPages[key].PV += page.PV
			mergedPages[key].VoteUp += page.VoteUp
			mergedPages[key].VoteDown += page.VoteDown
		}
	}

	// delete all pages
	dao.DB().Where("1 = 1").Delete(&entity.Page{})

	// insert merged pages
	pages = []*entity.Page{}
	for _, page := range mergedPages {
		pages = append(pages, page)
	}
	if err := dao.DB().CreateInBatches(pages, 1000); err.Error != nil {
		log.Fatal("Failed to insert merged pages. ", err.Error)
	}
}

Yeah, load the data into memory, drop it all, then recreate — that’s how you keep your database consistent. Questions?
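Incidentally, the merge key itself is subtly wrong: page.SiteName + page.Key collides for ("a", "bc") and ("ab", "c"). Here is the same aggregation as a pure, testable function with a delimited key (a sketch; the real fix would also run the delete and re-insert in a single transaction, or push the whole thing into one GROUP BY):

```go
package main

import (
	"fmt"
	"sort"
)

// Page mirrors the fields MergePages sums.
type Page struct {
	SiteName, Key        string
	PV, VoteUp, VoteDown int
}

// mergePages sums duplicates by (SiteName, Key). The composite key is
// NUL-delimited: concatenating SiteName+Key raw, as in the quoted
// code, would merge ("a", "bc") and ("ab", "c") together.
func mergePages(pages []Page) []Page {
	merged := map[string]*Page{}
	keys := []string{}
	for _, p := range pages {
		k := p.SiteName + "\x00" + p.Key
		if m, ok := merged[k]; ok {
			m.PV += p.PV
			m.VoteUp += p.VoteUp
			m.VoteDown += p.VoteDown
		} else {
			cp := p
			merged[k] = &cp
			keys = append(keys, k)
		}
	}
	sort.Strings(keys) // deterministic output order
	out := make([]Page, 0, len(merged))
	for _, k := range keys {
		out = append(out, *merged[k])
	}
	return out
}

func main() {
	pages := []Page{
		{SiteName: "s", Key: "/post", PV: 3},
		{SiteName: "s", Key: "/post", PV: 4},
	}
	fmt.Println(mergePages(pages))
}
```

Keeping the aggregation pure makes it trivial to unit-test; the persistence step is then one atomic write instead of a delete-all gap.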

Redis, memcached? It will not be more inconsistent than it is already.

Comentario

I don’t know why I picked it next, but I did. It sounded like a promising rebirth of stale Commento.

I will not delve much into the Angular part. I dislike Angular a priori, because it’s the perfect answer to the question “how do you pick a good architecture?” — you just always pick a suboptimal one, unconditionally, because everybody expects you to employ it before you write a single line of logic; hence you are no longer “picking a suboptimal architecture blindly”, you are “playing by team rules” now.

Let me explain the problem by describing the server side of things, where you don’t have Angular’s constraints.
If you want to create a comment, you do:
https://gitlab.com/comentario/comentario/-/blob/3804787469cc18a1b39b11f3b362da24570baa31/internal/svc/impex_svc.go#L103

Services.CommentService(nil).Create(c)

which creates CommentService wrapper
https://gitlab.com/comentario/comentario/-/blob/3804787469cc18a1b39b11f3b362da24570baa31/internal/svc/manager.go#L32

type ServiceManager interface {
    CommentService(tx *persistence.DatabaseTx) CommentService
    ...
}

https://gitlab.com/comentario/comentario/-/blob/3804787469cc18a1b39b11f3b362da24570baa31/internal/svc/manager.go#L105

// dbTxAware is a database implementation of persistence.Tx
type dbTxAware struct {
	tx *persistence.DatabaseTx // Optional transaction
	db *persistence.Database   // Connected database instance
}

// dbx returns a database executor to be used with database statements and queries: the transaction, if set, otherwise
// the database itself
func (d *dbTxAware) dbx() persistence.DBX {
	if d.tx != nil {
		return d.tx
	}
	if d.db == nil {
		panic("dbTxAware.dbx: db not assigned")
	}
	return d.db
}

func (m *serviceManager) CommentService(tx *persistence.DatabaseTx) CommentService {
    return &commentService{dbTxAware{tx: tx, db: m.db}}
}

The CommentService itself, of course, will always be in a separate file:
https://gitlab.com/comentario/comentario/-/blob/3804787469cc18a1b39b11f3b362da24570baa31/internal/svc/comment_svc.go#L20

type CommentService interface {
    Create(comment *data.Comment) error
    ...
}

func (svc *commentService) Create(c *data.Comment) error {
    if err := persistence.ExecOne(svc.dbx().Insert("cm_comments").Rows(c));
    ...
}

Sounds good so far, right? We dragged the transaction through four layers of abstraction, but we had to, to support different databases and transactions. Right? Yes, until you reach these lines:
https://gitlab.com/comentario/comentario/-/blob/3804787469cc18a1b39b11f3b362da24570baa31/internal/persistence/db.go#L169-180

func (db *Database) Begin() (*DatabaseTx, error) {
    // SQLite doesn't support concurrent writes (see e.g. https://github.com/mattn/go-sqlite3/issues/50,
    // https://github.com/mattn/go-sqlite3/issues/1179), so we'll avoid using them altogether to prevent intermittent
    // failures
    if db.dialect == dbSQLite3 {
        return nil, ErrTxUnsupported
    } else if gtx, err := db.goquDB().Begin(); err != nil {
        return nil, err
    } else {
        return &DatabaseTx{tx: gtx}, nil
    }
}

https://gitlab.com/comentario/comentario/-/blob/3804787469cc18a1b39b11f3b362da24570baa31/internal/persistence/db.go#L388-398

func (db *Database) WithTx(f func(tx *DatabaseTx) error) (err error) {
    // Try to initiate a transaction
    tx, err := db.Begin()
    if errors.Is(err, ErrTxUnsupported) {
        // Database doesn't provide (proper) transaction support: simply run the provided function without error
        // handling
        fErr := f(nil)
        if fErr != nil {
            logger.Errorf("Database.WithTx/func(): %v", fErr)
        }
        return fErr
    } 

So it seems we did not need all this transaction nonsense after all. And the abstractions are doing exactly nothing; they were wrong from the start. I failed to trace any instance of serializable transactions anywhere in the project, so I assume that on PostgreSQL all transactions run in “read committed” isolation mode, like
https://gitlab.com/comentario/comentario/-/blob/3804787469cc18a1b39b11f3b362da24570baa31/internal/api/restapi/handlers/user.go

if cntDel, err = svc.Services.CommentService(tx).DeleteByUser(&u.ID); err != nil {
...
} else if cntDel, err = svc.Services.CommentService(tx).MarkDeletedByUser(&user.ID, &u.ID); err != nil { 

Not every comment is a financial transaction; for comments, eventual consistency is much more important than atomicity or fault tolerance. Yet both Artalk and Comentario went for fully-fledged ACID databases just to drop their consistency guarantees in the end.

Disclaimer

The projects I’ve mentioned still required lots of work, and these are not junior-dev results. I’m sure all the people involved are gladly working as respectable devs and consultants. It’s not that they are evil; it’s that the consumer quality bar has dropped so low that this kind of sloppiness has become totally acceptable, or even a requirement, when you are expected to deliver features first and maybe later ask “why?”.

However, if you just need to make your visitors feel engaged, you may as well consider employing Echochamber.js for commenting.