
Alternative projects

General concurrency models

Compared features: fibers, threads, local actors, distributed actors, sandboxed actors[1].

Compared projects:

  • cqueues[2]
  • Tarantool[3]
  • Effil[4]
  • Lanes[5]
  • Löve[6]
  • ConcurrentLua[7]
  • luaproc[8]
  • Emilua

Do notice that the comparison doesn’t go into many details. For instance, several projects allow you to use threads, but only Emilua is flexible enough to let you create heterogeneous thread pools, where one thread may be pinned to a single Lua VM, another thread is shared among several Lua VMs, and a work-stealing thread pool takes care of the rest. Too many comparisons would be needed to explore all the other differences.
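As a rough sketch of that flexibility (with no guarantee of matching the exact API of your Emilua version), the snippet below grows the current execution context into a small thread pool, then spawns one actor that shares that pool and one that gets a context of its own. The spawn_vm() option names (module, inherit_context) and the 'worker' module are assumptions used purely for illustration.

    -- Add worker threads to the current execution context so VMs that
    -- share this context are scheduled over a work-stealing pool.
    spawn_context_threads(3)

    -- An actor that shares the caller's execution context/thread pool.
    -- 'worker' is a hypothetical module that reads its inbox.
    local pooled = spawn_vm{ module = 'worker', inherit_context = true }

    -- An actor that gets its own execution context, i.e. a Lua VM
    -- effectively pinned to its own thread.
    local pinned = spawn_vm{ module = 'worker', inherit_context = false }

    pooled:send('job for the shared pool')
    pinned:send('job for the pinned VM')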

An integrated IO engine also belongs in the comparison of concurrency models, but a separate comparison focused solely on IO engines is presented later (mentioning only the projects that have one).

NodeJS wannabes

Compared features: fibers, threads, local actors, sandboxed actors.

Compared projects:

  • Luvit[9]
  • LuaNode[10]
  • nodish[11]
  • Emilua (not a NodeJS wannabe)

When you create a project that tries to bring together the best of two worlds, you’re actually also bringing together the worst of the two worlds. This sums up most of the attempts to mirror the NodeJS API:

  • If everything is implemented correctly, it can only manage to be as bad as NodeJS is.

  • Horrible back-pressure: data is pushed at the consumer through events instead of being pulled at the pace the consumer can actually handle.
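To make the back-pressure point concrete, below is a minimal sketch of the pull-based (“active async”) style that Emilua favours, assuming an Asio-like socket object with read_some()/write_some() and the byte_span type; treat the exact names as illustrative. Because the fiber explicitly asks for the next chunk, the peer is never read faster than the data can be handled, and back-pressure falls out of the control flow itself rather than being bolted on via pause()/resume() calls over push-style 'data' events.

    local byte_span = require 'byte_span'

    -- Echo loop: read a chunk, write it back, repeat.
    local function echo(sock)
        local buf = byte_span.new(1024)
        while true do
            -- Blocks this fiber (not the thread) until data arrives.
            local nread = sock:read_some(buf)
            -- A production version would loop until the whole chunk is
            -- written; a single call keeps the sketch short. read_some()
            -- is expected to raise on EOF, ending the loop.
            sock:write_some(buf:slice(1, nread))
        end
    end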

IO engines

Backends compared: Linux (epoll), Linux (io_uring), BSD (kqueue), Windows.

Compared projects:

  • cqueues
  • Tarantool
  • Luvit
  • LuaNode
  • nodish
  • ugly[12]
  • Emilua

This document deliberately left some projects out of the comparisons above. The underlying reason is that it focuses on one problem space: the traditional userspace-in-a-modern-OS-box. Projects such as eLua[13], NodeMCU[14], XDPLua[15], and Snabb[16] will always have a place in the market. And the reason is quite simple: it’s not possible to cater for very specific needs and be general at the same time. For instance, if you’re trying to run something on the kernel side, there are specific restrictions and concerns that will contaminate every dependent project further down the line. It’s not merely a question of porting the same API over; the mindset behind the whole program would need to change as well.

Emilua is young, and there are plans to explore use cases that stretch just a little beyond the traditional userspace-in-a-modern-OS-box. However, it is still a general, cross-platform execution engine, and it remains impossible to tackle very specific use cases and be general at the same time.

OpenResty

Most languages are not designed to make the programmer worry about memory allocation failing, and Lua is no different. If you want to deal with resource exhaustion, C and C++ are the only good choices.

A web server written in Lua and exposed directly to the web is rarely a good idea, as it can’t properly handle allocation failures or do proper resource management in a few other areas.

OpenResty’s core is a C application (NGINX). The Lua application written on top is hosted by this C runtime, which is well aware of the connections, the process resources, and their relationship to each Lua-written handler. The runtime can then perform proper resource management. Lua is a mere slave of this runtime; it doesn’t really own anything.

This architecture works quite well while still giving productivity to the web application developer. Emilua simply can’t compete with OpenResty here. Go for OpenResty if you’re building an app exposed to the wide web.

Emilua can perform better for client apps that you deliver to customers. For instance, you might develop a torrent client with Emilua, and it would work better than it would with OpenResty. Emilua’s HTTP interface is also designed more like a gateway interface, so, in the future, we can implement this interface as an OpenResty lib to easily allow porting apps over.

  • Emilua can also be used behind a proper server.

  • Emilua can be used to quickly prototype the architecture of apps to be written later in C++ using an API that resembles Boost.Asio a lot (and IOFiber will bring them even closer).

  • In the future, Emilua will be able to make use of native plug-ins so you can offload much of the resource management.

  • Emilua apps can do some level of resource (under)management by restricting the number of connections/fibers/…​ (see the sketch after this list).

  • Emilua won’t be that bad given its defaults (active async style, no implicit write buffer to deal with concurrent writes, many abstractions designed with back-pressure in mind, …​).

  • The actor model opens up some possibilities for Emilua’s future (e.g. partition your app resources among multiple VMs and feel free to kill the bad VMs).
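As an illustration of the connection-limiting point in the list above, the sketch below sheds load once a fixed number of connections is live. It assumes an Asio-like acceptor API (ip.tcp.acceptor), the global spawn() for fibers, and an app-defined handle_client() function; all of these names are illustrative rather than a verbatim copy of the Emilua API.

    local ip = require 'ip'

    local MAX_CONNECTIONS = 128
    local live_connections = 0

    local acceptor = ip.tcp.acceptor.new()
    acceptor:open('v4')
    acceptor:bind(ip.address.any_v4(), 8080)
    acceptor:listen()

    while true do
        local sock = acceptor:accept()
        if live_connections >= MAX_CONNECTIONS then
            -- At capacity: refuse new work instead of letting memory and
            -- fiber count grow without bound.
            sock:close()
        else
            live_connections = live_connections + 1
            spawn(function()
                -- handle_client() is app-defined (hypothetical here).
                pcall(handle_client, sock)
                sock:close()
                live_connections = live_connections - 1
            end):detach()
        end
    end

A fuller version could park the accept loop instead of closing sockets outright, but the counting pattern stays the same.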