Home

This is my personal blog. It's powered by mdbook.

The source code can be found here.

Cargo as a tool to distribute C/C++ executables


Date: 2021-05-04

If you didn't know already, Cargo, Rust's package manager, can be used to install executable binaries via cargo-install. The downside is that they have to be Rust binaries. That makes sense since it does build binaries from source, and naturally Cargo knows how to build Rust projects!

However, this functionality is too good to be exclusive to Rust projects. We'll see how we can also use cargo-install to install C/C++ executables.

In short, this works by redefining the C/C++ executable's main() function.

The technique described here is useful when an application lacks an official package, when the official package is outdated or when you don't wish to conflict with an already installed official package.

A similar technique was used to wrap Fluid (FLTK's GUI designer). With C++, however, the redefined main function also needs to be declared extern "C" to avoid name mangling.

Back to the topic. We'll start with a simpler project. We'll create our Rust project:

$ cargo new cbin
$ cd cbin

We'll create a C binary which takes command-line arguments:

$ touch src/main.c
$ cat > src/main.c << EOF
> #include <stdio.h>
> int main(int argc, char **argv) {
>   if (argc > 1)
>     printf("Hello %s\n", argv[1]);
>   return 0;
> }
> EOF

To build the C binary, we'll need a build script, and we can use the cc crate along with it to make our lives a bit easier!

# Cargo.toml
[build-dependencies]
cc = "1.0"

Our build.rs file (which we create at the root of our project) would look something like:

// build.rs
fn main() {
    cc::Build::new()
        .file("src/main.c")
        .define("main", "cbin_main") // notice our define here!
        .compile("cbin");
}

Check that everything builds by running cargo build; you'll notice that Cargo built a static library libcbin.a (or cbin.lib if using the MSVC toolchain) in the OUT_DIR.

Let's now wrap the function cbin_main, which we redefined in our build. Our src/main.rs should look like:

// src/main.rs
use std::env;
use std::ffi::CString;
use std::os::raw::*;
use std::ptr;

extern "C" {
    pub fn cbin_main(argc: c_int, argv: *mut *mut c_char) -> c_int;
}

fn main() {
    let mut args: Vec<_> = env::args()
        .map(|s| CString::new(s).unwrap().into_raw())
        .collect();
    let argc = args.len() as c_int;
    // C expects argv[argc] to be a null pointer
    args.push(ptr::null_mut());
    let _ret = unsafe { cbin_main(argc, args.as_mut_ptr()) };
}

This basically takes all command-line args passed to the Rust binary and passes them to the C binary (that we turned into a library!).
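As an aside, C's argv is conventionally NULL-terminated (argv[argc] == NULL), and some C programs rely on that. The marshalling can be sketched as a small standalone helper — to_argv is a hypothetical name, not part of the project above:

```rust
use std::ffi::CString;
use std::os::raw::c_char;
use std::ptr;

// hypothetical helper: builds a NULL-terminated argv from Rust strings
fn to_argv(args: &[&str]) -> Vec<*mut c_char> {
    let mut v: Vec<*mut c_char> = args
        .iter()
        .map(|s| CString::new(*s).unwrap().into_raw())
        .collect();
    v.push(ptr::null_mut()); // C expects argv[argc] to be NULL
    v
}

fn main() {
    let argv = to_argv(&["cbin", "world"]);
    assert_eq!(argv.len(), 3); // 2 args + NULL terminator
    assert!(argv[2].is_null());
    // reclaim the leaked CStrings (fine to skip in a short-lived wrapper)
    for p in &argv[..2] {
        drop(unsafe { CString::from_raw(*p) });
    }
}
```

For a wrapper binary that immediately hands control to the C code, leaking the CStrings until process exit is harmless.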

You can check by running:

$ cargo run -- world

Now you can run cargo install --path . to install your C binary wrapper. Or you can publish your crate to crates.io so that others can install it with cargo install.

Interfacing with the Objective-C runtime


Date: 2022-07-18

I recently released a proof-of-concept library wrapping several native widgets on Android and iOS. It's written in C++, and I've also released Rust bindings to it. In my post on the Rust subreddit announcing the release, a fellow redditor validly remarked: "I'm a little surprised you wrapped a floui-rs around the Floui C++ project rather than just writing rust and calling into objc or the jni". I wasn't satisfied with my succinct answer, and I thought a Reddit reply wouldn't provide enough context to many reading it, so I decided to write this post. A note before diving in: floui's iOS implementation is in Objective-C++ and requires a #define FLOUI_IMPL macro in at least one Objective-C++ source file; the rest of the gui code can be written in .cpp or .mm files, since the interface is in C++. As for the JNI part, it's equally painful to write in C++ or Rust, so there's no point in discussing it.

Most Apple frameworks expose an Objective-C API, except for some which expose a C++ (DriverKit) or a Swift API (StoreKit 2). That makes Objective-C Apple's systems language par excellence, and other languages need to be able to interface with it to use any functionality Apple provides in its frameworks. Interfacing with Objective-C isn't straightforward, and apart from Swift (and only on Apple platforms; it can't interface with GNUstep, for example), no other language can directly interface with it. Luckily, the Objective-C runtime offers C functions which allow other languages to interface with Objective-C frameworks. And any language that can call C can, via the objc runtime, interface with Objective-C.

In this post, I'll show how this can be done using C++ and Rust. The C++ version can be modified to C by just replacing auto with a concrete type.

As an example, we'll create an iOS app purely in C++ and then in Rust. I say pure in that there's no visible Objective-C code as far as the developer is concerned; however, this still calls into the UIKit framework, which is an Objective-C framework. The app will be the equivalent of the following Objective-C app:

// main.m or main.mm
#import <UIKit/UIKit.h>

@interface AppDelegate : UIResponder <UIApplicationDelegate>
@property(strong, nonatomic) UIWindow *window;
@end

@interface ViewController : UIViewController
@end

@implementation AppDelegate
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    CGRect frame = [[UIScreen mainScreen] bounds];
    self.window = [[UIWindow alloc] initWithFrame:frame];
    [self.window setRootViewController:[ViewController new]];
    self.window.backgroundColor = UIColor.whiteColor;
    [self.window makeKeyAndVisible];
    return YES;
}
@end

@implementation ViewController
- (void)clicked {
    NSLog(@"clicked");
}
- (void)viewDidLoad {
    [super viewDidLoad];
    UIButton *btn = [UIButton buttonWithType:UIButtonTypeCustom];
    btn.frame = CGRectMake(100, 100, 80, 30);
    [btn setTitle:@"Click" forState:UIControlStateNormal];
    [btn setTitleColor:UIColor.blueColor forState:UIControlStateNormal];
    [btn addTarget:self
                  action:@selector(clicked)
        forControlEvents:UIControlEventPrimaryActionTriggered];
    [self.view addSubview:btn];
}
@end

int main(int argc, char *argv[]) {
    NSString *appDelegateClassName;
    @autoreleasepool {
        appDelegateClassName = NSStringFromClass([AppDelegate class]);
    }
    return UIApplicationMain(argc, argv, nil, appDelegateClassName);
}

Simple enough: it creates a view with a button which prints to the console when clicked. Note that Objective-C can seamlessly incorporate C++ code into what's called Objective-C++; this only requires changing the file extension from .m to .mm. That already speaks volumes about the flexibility and extensibility of Objective-C. Generally I find Objective-C++ less verbose than Objective-C, since it can benefit from modern C++ features like type inference:

UISomeBespokenlyLongTypeName *t = [UISomeBespokenlyLongTypeName new];
// becomes
auto t = [UISomeBespokenlyLongTypeName new];

Modern C++ also offers other niceties like lambdas, namespaces, metaprogramming capabilities, optional, variant, containers and algorithms. Objective-C itself is considered verbose, especially when compared to Swift. However, calling into the ObjC runtime from other languages, as we'll see, is even more verbose.

To get things working in pure C++, we need to include several headers: those of the Objective-C runtime, CoreFoundation and CoreGraphics. The Objective-C runtime headers provide several functions like objc_msgSend and others which allow us to create Objective-C classes, add/override methods, etc.

// main.cpp
#include <CoreFoundation/CoreFoundation.h>
#include <CoreGraphics/CoreGraphics.h>
#define OBJC_OLD_DISPATCH_PROTOTYPES 1
#include <objc/objc.h>
#include <objc/runtime.h>
#include <objc/message.h>

extern "C" int UIApplicationMain(int, ...);

extern "C" void NSLog(objc_object *, ...);

BOOL didFinishLaunching(objc_object *self, SEL _cmd, void *application, void *options) {
    auto mainScreen = objc_msgSend((id)objc_getClass("UIScreen"), sel_getUid("mainScreen"));
    CGRect (*boundsFn)(id receiver, SEL operation);
    boundsFn = (CGRect(*)(id, SEL))objc_msgSend_stret;
    CGRect frame = boundsFn(mainScreen, sel_getUid("bounds"));
    auto win = objc_msgSend((id)objc_getClass("UIWindow"), sel_getUid("alloc"));
    win = objc_msgSend(win, sel_getUid("initWithFrame:"), frame);
    auto viewController = objc_msgSend((id)objc_getClass("ViewController"), sel_getUid("new"));
    objc_msgSend(win, sel_getUid("setRootViewController:"), viewController);
    objc_msgSend(win, sel_getUid("makeKeyAndVisible"));
    auto white = objc_msgSend((id)objc_getClass("UIColor"), sel_getUid("whiteColor"));
    objc_msgSend(win, sel_getUid("setBackgroundColor:"), white);
    object_setIvar(self, class_getInstanceVariable(objc_getClass("AppDelegate"), "window"), win);
    return YES;
}

void didLoad(objc_object *self, SEL _cmd) {
    objc_super _super = {
         .receiver = self,
         .super_class = objc_getClass("UIViewController"),
    };
    objc_msgSendSuper(&_super, sel_getUid("viewDidLoad"));
    auto btn = objc_msgSend((id)objc_getClass("UIButton"), sel_getUid("buttonWithType:"), 0);
    objc_msgSend(btn, sel_getUid("setFrame:"), CGRectMake(100, 100, 80, 30));
    auto title = objc_msgSend((id)objc_getClass("NSString"), sel_getUid("stringWithUTF8String:"), "Click");
    objc_msgSend(btn, sel_getUid("setTitle:forState:"), title, 0);
    auto blue = objc_msgSend((id)objc_getClass("UIColor"), sel_getUid("blueColor"));
    objc_msgSend(btn, sel_getUid("setTitleColor:forState:"), blue, 0);
    objc_msgSend(btn, sel_getUid("addTarget:action:forControlEvents:"), self, sel_getUid("clicked"), 1 << 13);
    auto view = objc_msgSend(self, sel_getUid("view"));
    objc_msgSend(view, sel_getUid("addSubview:"), btn);
}

void clicked(objc_object *self, SEL _cmd) {
    auto msg = objc_msgSend((id)objc_getClass("NSString"), sel_getUid("stringWithUTF8String:"), "clicked");
    NSLog(msg);
}

int main(int argc, char *argv[]) {
    auto AppDelegateClass = objc_allocateClassPair(objc_getClass("UIResponder"), "AppDelegate", 0);
    class_addIvar(AppDelegateClass, "window", sizeof(id), 0, "@");
    class_addMethod(AppDelegateClass, sel_getUid("application:didFinishLaunchingWithOptions:"), (IMP) didFinishLaunching, "i@:@@");
    objc_registerClassPair(AppDelegateClass);

    auto ViewControllerClass = objc_allocateClassPair(objc_getClass("UIViewController"), "ViewController", 0);
    class_addMethod(ViewControllerClass, sel_getUid("viewDidLoad"), (IMP) didLoad, "v@:");
    class_addMethod(ViewControllerClass, sel_getUid("clicked"), (IMP) clicked, "v@:");
    objc_registerClassPair(ViewControllerClass);
    
    auto name = objc_msgSend((id)objc_getClass("NSString"), sel_getUid("stringWithUTF8String:"), "AppDelegate");
    id autoreleasePool = objc_msgSend((id)objc_getClass("NSAutoreleasePool"), sel_registerName("new"));
    UIApplicationMain(argc, argv, nil, name);
    objc_msgSend(autoreleasePool, sel_registerName("drain"));
}

You can create a new Objective-C project, delete all its source files and replace them with this single C++ file, and Xcode will happily build it and run the binary on your simulator. If you'd like to compile it from the command line instead:

clang++ -std=c++11 -arch x86_64 -isysroot $(xcrun --sdk iphonesimulator --show-sdk-path) main.cpp -fobjc-arc -lobjc -framework UIKit
# You can install it directly on an iOS simulator if you prepare an appropriate info.plist
xcrun simctl install booted path/to/bundle.app # assumes you have a simulator booted

You can also use CMake (with a toolchain file) with certain bundle info predefined in your CMakeLists.txt:

cmake_minimum_required(VERSION 3.14)
project(app)

set(CMAKE_EXPORT_COMPILE_COMMANDS ON)

set(MACOSX_BUNDLE_BUNDLE_NAME "Minimal Uikit Application")
set(MACOSX_BUNDLE_BUNDLE_VERSION 0.1.0)
set(MACOSX_BUNDLE_COPYRIGHT "Copyright © 2022 moalyousef.github.io. All rights reserved.")
set(MACOSX_BUNDLE_GUI_IDENTIFIER com.neurosrg.cpure)
set(MACOSX_BUNDLE_ICON_FILE app)
set(MACOSX_BUNDLE_LONG_VERSION_STRING 0.1.0)
set(MACOSX_BUNDLE_SHORT_VERSION_STRING 0.1)

add_executable(main main.cpp)
target_compile_features(main PUBLIC cxx_std_11)
target_link_libraries(main PUBLIC "-framework UIKit" "-framework CoreFoundation" "-framework Foundation" objc)

Then:

cmake -Bbin -GNinja -DPLATFORM=OS64COMBINED -DCMAKE_TOOLCHAIN_FILE=ios.toolchain.cmake # just to get the compile commands for clangd auto-completion on vscode
rm -rf bin
cmake -Bbin -GXcode -DPLATFORM=OS64COMBINED -DCMAKE_TOOLCHAIN_FILE=ios.toolchain.cmake 
cd bin
xcodebuild build -configuration Debug -sdk iphonesimulator -arch x86_64 CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED=NO
xcrun simctl install booted Debug-iphonesimulator/main.app

Returning to our C++ example, you can see how the verbosity of Objective-C roughly tripled in the C++ version. This is also a stringly-typed API: if you mistype a selector, the compiler won't catch it, and you'll be hit with a runtime exception. You could argue that Objective-C will also gladly let you write whatever selector you want and still throw. However, with tooling like Xcode, this would be caught. You can even compile with -Werror=objc-method-access to turn such problems into compile-time errors in Objective-C.

We'll now move to Rust, which is a relatively young programming language. It's not an officially Apple-supported language; however, it has much better cross-platform support than, for example, Swift. More importantly, it can target Apple's Objective-C runtime. To do that, we'll use the objc crate, which offers some convenient wrappers around the runtime functions:

extern crate objc; // remember to add it to your Cargo.toml!

use objc::declare::ClassDecl;
use objc::runtime::{Object, Sel, BOOL, YES};
use objc::{class, msg_send, sel, sel_impl};
use std::os::raw::c_char;
use std::ptr;

#[repr(C)]
struct Frame(pub f64, pub f64, pub f64, pub f64);

extern "C" fn did_finish_launching_with_options(
    obj: &mut Object,
    _: Sel,
    _: *mut Object,
    _: *mut Object,
) -> BOOL {
    unsafe {
        let frame: *mut Object = msg_send![class!(UIScreen), mainScreen];
        let frame: Frame = msg_send![frame, bounds];
        let win: *mut Object = msg_send![class!(UIWindow), alloc];
        let win: *mut Object = msg_send![win, initWithFrame: frame];
        let vc: *mut Object = msg_send![class!(ViewController), new];
        let _: () = msg_send![win, setRootViewController: vc];
        let _: () = msg_send![win, makeKeyAndVisible];
        let white: *mut Object = msg_send![class!(UIColor), whiteColor];
        let _: () = msg_send![win, setBackgroundColor: white];
        obj.set_ivar("window", win as usize);
    }
    YES
}

extern "C" fn did_load(obj: &mut Object, _: Sel) {
    unsafe {
        let _: () = msg_send![super(obj, class!(UIViewController)), viewDidLoad];
        let view: *mut Object = msg_send![&*obj, view];
        let btn: *mut Object = msg_send![class!(UIButton), buttonWithType:0];
        let _: () = msg_send![btn, setFrame:Frame(100., 100., 80., 30.)];
        let title: *mut Object = msg_send![class!(NSString), stringWithUTF8String:"Click\0".as_ptr()];
        let _: () = msg_send![btn, setTitle:title forState:0];
        let blue: *mut Object = msg_send![class!(UIColor), blueColor];
        let _: () = msg_send![btn, setTitleColor:blue forState:0];
        let _: () = msg_send![btn, addTarget:obj action:sel!(clicked) forControlEvents:1<<13];
        let _: () = msg_send![view, addSubview: btn];
    }
}

extern "C" fn clicked(_obj: &mut Object, _: Sel) {
    println!("clicked");
}

fn main() {
    unsafe {
        let ui_responder_cls = class!(UIResponder);
        let mut app_delegate_cls = ClassDecl::new("AppDelegate", ui_responder_cls).unwrap();

        app_delegate_cls.add_method(
            sel!(application:didFinishLaunchingWithOptions:),
            did_finish_launching_with_options
                as extern "C" fn(&mut Object, Sel, *mut Object, *mut Object) -> BOOL,
        );

        app_delegate_cls.add_ivar::<usize>("window");

        app_delegate_cls.register();

        let ui_view_controller_cls = class!(UIViewController);
        let mut view_controller_cls =
            ClassDecl::new("ViewController", ui_view_controller_cls).unwrap();

        view_controller_cls.add_method(
            sel!(viewDidLoad),
            did_load as extern "C" fn(&mut Object, Sel),
        );

        view_controller_cls.add_method(sel!(clicked), clicked as extern "C" fn(&mut Object, Sel));

        view_controller_cls.register();

        let name: *mut Object =
            msg_send![class!(NSString), stringWithUTF8String:"AppDelegate\0".as_ptr()];

        extern "C" {
            fn UIApplicationMain(
                argc: i32,
                argv: *mut *mut c_char,
                principalClass: *mut Object,
                delegateName: *mut Object,
            ) -> i32;
        }

        let autoreleasepool: *mut Object = msg_send![class!(NSAutoreleasePool), new];
        // Anything needing the autoreleasepool
        let _: () = msg_send![autoreleasepool, drain];

        UIApplicationMain(0, ptr::null_mut(), ptr::null_mut(), name);
    }
}

This can be built with cargo:

cargo build --target=x86_64-apple-ios # targeting a simulator
# you can move the generated binary to a prepared bundle folder with an appropriate info.plist

Similarly, you can use cargo-bundle, and define bundle metadata in your Cargo.toml:

[package.metadata.bundle]
name = "myapp"
identifier = "com.neurosrg.myapp"
category = "Education"
short_description = "A pure rust app"
long_description = "A pure rust app"

And with cargo-bundle:

cargo bundle --target x86_64-apple-ios
xcrun simctl install booted target/x86_64-apple-ios/debug/bundle/ios/pure.app

A nice thing about cargo-bundle, even though its iOS bundle support is experimental, is that it's still faster than xcodebuild!

Back to our Rust example: it's not as verbose as the pure C/C++ version, thanks to the objc crate doing a lot of the heavy lifting, such as encoding, among other niceties. It does, however, require explicit types when using the msg_send! macro. Also, returning structs works with msg_send!, whereas in C/C++ you'd have to use objc_msgSend_stret(). Although you don't see as many quotes as in the C++ version, it's still a stringly-typed API, meaning a typo won't be caught at compile time; instead it'll throw a runtime exception. One downside is that since Rust isn't an officially supported language by Apple, projects like the objc crate and others wrapping Apple frameworks are made by members of the Rust community, and can run into issues like lack of maintenance (which appears to have happened to the objc crate).

Conclusion

C/C++/Rust can easily target the Objective-C runtime; less so when it comes to targeting Swift, but that's not as important. C and C++ have an extra advantage in that they're officially supported by Apple: you can create an Xcode project with C/C++ source files and headers, and you get automatic integration in the build, in addition to code completion, etc. The system compiler on Apple platforms is clang (AppleClang), which is a C/C++/ObjC/ObjC++ compiler. The default build system, xcodebuild, supports creating universal binaries out of the box, and so does CMake, the de facto C++ build system (which isn't directly supported by Xcode, although it can generate xcodeproj files). Rust comes with its own build system/package manager, Cargo. Although Cargo is great, like Rust it's not directly supported in Xcode. Also, at the time of writing, it can't generate macOS or iOS bundles, nor can it produce universal binaries. Luckily, you can use packages like cargo-bundle and cargo-lipo to create your bundles and universal libraries. Using the ObjC runtime functions like objc_msgSend/msg_send!, apart from allowing a developer to write in their preferred programming language, adds no advantage whatsoever to the codebase. As an API, it's exceedingly verbose, it's stringly-typed (like the JNI), and most of it is so unsafe to use (in Rust terms) that it's just more convenient to wrap it all in unsafe. Essentially, writing Objective-C/C++ can be the least painful path.

Rust vs C++ for frontend web (wasm) programming


Date: 2022-07-26

Several languages can now target wasm. I'll focus on Rust and C++, as these seem to have the most mature ecosystems: C++'s Emscripten toolchain and Rust's wasm-bindgen (web-sys, js-sys, etc.) ecosystem. Keep in mind that both languages leverage LLVM's ability to generate wasm. Wasm itself has no direct access to the DOM; as such, DOM calls pass through javascript.

Basically, when using a language, you're buying into its ecosystem. You can still target Emscripten from Rust via the wasm32-unknown-emscripten target; however, that requires that the LLVM version your Rust toolchain uses and the LLVM version Emscripten uses are compatible. Similarly, you can invoke clang directly with the --target=wasm32 flag (which requires wasm-ld and the std headers), and it should output wasm. However, the non-Emscripten wasm ecosystem in C++ is barren!

Advantages of using C++:

  • Emscripten's headers are C/C++ headers.
  • Emscripten supports CMake (the de facto build system for C++), via both emcmake and a CMake toolchain file. However, the docs refer to raw calls of emcc/em++, which can be difficult to translate to proper CMake scripts:
add_executable(index src/main.cpp)
set_target_properties(index PROPERTIES SUFFIX .html LINK_FLAGS "-s WASM=1 -s EVAL_CTORS=2 --bind --shell-file ${CMAKE_CURRENT_LIST_DIR}/my_shell.html")
  • Emscripten provides Boost, SDL and OpenGL/WebGL support out of the box.
  • Emscripten translates OpenGL calls to WebGL.
  • vcpkg (a C/C++ package manager) supports building packages for emscripten.
  • Qt supports Emscripten (buggy).
  • Emscripten provides a virtual file system that simulates the local file system, so std::filesystem works out of the box.
  • Emscripten supports multithreading.
  • The above means that an existing native game leveraging SDL +/- OpenGL can be recompiled using emscripten, with probably minor tweaks to the build script (and event-loop), and things should run.
  • Emscripten bundles the binaryen toolchain as well. For example, compiling with optimizations will automatically run wasm-opt.

Disadvantages of using C++:

  • Emscripten requires around 800 MB of install space. It bundles many tools which might already be installed on your system (like nodejs). If installed in an unusual location, the install will likely be broken!
  • Using C++ outside of Emscripten to target wasm/web is complicated: it requires wasm-ld, the std/system headers (maintained in the Emscripten project), and writing the js glue manually.
  • Emscripten provides a WebIDL binder, however, bindings to the DOM api are not provided. It can be integrated into a build script, but in any case, it's not ergonomic to generate and use.

This makes targeting the DOM with Emscripten a bit of a chore:

#include <emscripten/val.h>

using emscripten::val;

int main() {
    auto doc = val::global("document");
    auto body = doc.call<val>("getElementsByTagName", val("body"))[0];
    auto btn = doc.call<val>("createElement", val("BUTTON"));
    body.call<void>("appendChild", btn);
    btn.set("textContent", "Click");
}

As you can probably guess, these DOM calls are stringly-typed and aren't checked at compile time; if you pass a wrong type, or even make a typo, it will fail at runtime.

Advantages of using Rust:

  • Cargo is agnostic to the target. And installing the wasm32-unknown-unknown target is trivial.
  • Even without Emscripten, wasm-bindgen provides bindings to much of the DOM api and other javascript calls.
  • wasm-bindgen provides a cli tool which allows generating javascript glue code for loading into web and non-web apps, which can be easily installed using cargo install wasm-bindgen-cli.
  • The Rust ecosystem provides several tools like wasm-pack and trunk which automatically call wasm-bindgen-cli and create the necessary js and html files needed for web.
  • The above means that the calls are checked at compile time, and are easier to program against:
// The above code translated to Rust
use wasm_bindgen::prelude::*;

fn main() {
    let win = web_sys::window().unwrap();
    let doc = win.document().unwrap();
    let body = doc.body().unwrap();
    let btn = doc.create_element("BUTTON").unwrap();
    body.append_child(&btn).unwrap();
    btn.set_text_content(Some("Click"));
}

Disadvantages of using Rust:

  • The wasm32-unknown-unknown target doesn't translate filesystem or threading calls. (The wasi targets do translate std::fs calls into platform-equivalent calls; however, an app targeting wasi might not work in the browser.)
  • The wasm32-unknown-unknown toolchain can optimize the output when building for release, but further optimization requires installing binaryen.
  • The wasm32-unknown-unknown toolchain doesn't translate OpenGL calls to webgl calls.
  • The wasm32-unknown-unknown toolchain doesn't support linking C/C++ libs built for wasm.
  • wasm-bindgen doesn't support the Emscripten wasm target.
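To illustrate the filesystem point above: code using std::fs compiles unchanged for the wasi targets (and natively), while on wasm32-unknown-unknown the same calls fail at runtime. A minimal sketch (the file name is arbitrary):

```rust
use std::env;
use std::fs;

fn main() {
    // On wasm32-wasip1/p2 this maps to WASI filesystem calls;
    // on wasm32-unknown-unknown these operations return errors at runtime.
    let path = env::temp_dir().join("wasi_fs_demo.txt");
    fs::write(&path, "hi").unwrap();
    assert_eq!(fs::read_to_string(&path).unwrap(), "hi");
    fs::remove_file(&path).unwrap();
}
```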

Conclusion

Both Rust and C++ can target the browser and perform DOM calls. Rust provides a better API with web-sys; Emscripten's bind API is stringly-typed, so it can be a chore to program against. The wasm32-unknown-unknown target is better geared for DOM calls or graphics via the canvas API, while Emscripten is better geared for apps targeting OpenGL/SDL (games). As for client-side computation, both targets can be used.

fltk-rs in 2022


Date: 2023-01-02

Looking back

Looking back at 2022, fltk-rs saw its 1.0 release in April 2022. In October 2022, the project finished its 3rd year. 2022 also saw the publication of the fltk-rs book, and a rewrite of fl2rust, which is a FLUID-to-Rust transpiler. FLUID is FLTK's RAD application, similar to GTK's Glade and Qt Creator's designer.

Looking back further, fltk-rs was started for a specific requirement: to easily deploy statically-linked gui applications on Windows 7 PCs in my university hospital's simulation center. It also had to be cross-platform for those using Mac laptops!

At the time, most pure Rust toolkits lacked many functionalities I needed (menus, tables, multiline text input, custom graph drawing, multiwindows, etc.), and I was just starting to use Rust, so I felt incapable of contributing to the budding gui ecosystem. Gtk and Qt bindings existed at the time, but required dynamic linking. It was also during the covid lockdown, so I had some extra time since teaching duties and elective cases decreased. So instead of just writing the project in another language, I started learning Rust and applied that knowledge to creating the bindings to FLTK.

That's to say that, as a novice, I made many mistakes, most of which I consider fixed with the 1.0 release. However, there are some which were pointed out later, namely the timeout api, especially when it comes to cancellation. The older functions were deprecated, but the newer ones like app::add_timeout3 and app::remove_timeout3 stick out like a sore thumb.

Maybe releasing a 1.0 was a bit hasty; it did, however, teach me more ways to mitigate api breakage. Another aspect was targeting FLTK 1.4, which, if you don't know, is a yet-to-be-released version of FLTK. That means it's a moving target. And even though FLTK is considered quite conservative as C++ codebases go (it still uses C++98, without the std library!), it's actively developed, and several of the added functions have changed their signatures, which required some workarounds in fltk-rs. Some things were out of my hands, such as the upstream removal of the FLTK Android driver, since it was considered experimental and difficult to integrate on the C++ side, especially in preparation for the 1.4 release; so, to avoid managing forks and such, it was subsequently removed from fltk-rs.

On the other hand, FLTK itself had nice improvements. Drawing on Linux/BSD now uses Cairo for anti-aliased drawing, and on Windows it uses GDI+ to the same effect. A Wayland backend was added which allows targeting Wayland directly, i.e. not through XWayland. And the OpenGL backend was extended to allow drawing widgets using OpenGL. That means GlWindow can now display widgets, if you need hardware acceleration or want to display widgets on top of 3D graphics!

Looking forward

The current plan is, once FLTK 1.4 ships, to release the last version of fltk-rs 1, and to continue working on fltk-rs 2.0 (work on that has already started in the version2 branch of the fltk-rs repo). It would use a 0.20.x version (if possible) until FLTK 1.5 is released, and only then would version 2 be released.

I'm also planning to see if AccessKit can be retrofitted onto fltk-rs, and maybe provide that functionality in a different crate. Even though FLTK handles keyboard navigation and input method editors, it still lacks screen reader support.

I also plan to try out the newer Rust gui toolkits, since I feel far removed from where I once was. I've only tried egui in the past year and a half, and that was to add an fltk-rs integration to it.

I'm already excited to see the ecosystem maturing. If you frequent the Rust subreddit, you'll notice a recurring question about which gui framework to use, and there will always be a few who say that Rust isn't ready for gui yet. Maybe that was true a few years back, but if you compare the situation to other programming languages (apart from C/C++), Rust already provides many gui crates that you can use today.

Fibonacci benchmarks between js, wasm and server


Date: 2023-11-15

Introduction

WebAssembly can't directly access the DOM; it has to call javascript, and is known to incur a cost when doing so. What about raw computation? How does wasm compare to server-side computation or client-side javascript computation, and when is it favorable to use it?

The source code for the benchmark can be found here, along with instructions on how to build it.
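For reference, the workload is the classic naive recursive Fibonacci; the exact code lives in the linked repo, but it is essentially this sketch:

```rust
// Naive recursive Fibonacci: exponential time, which is what makes
// fib(45) a meaningful stress test across js, wasm and the server.
fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn main() {
    assert_eq!(fib(10), 55);
    println!("{}", fib(20)); // 6765
}
```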

Results

With an input value of 1:

  • servertime: 6.831298828125 ms
  • wasmtime: 0.008056640625 ms
  • jstime: 0.004150390625 ms

With an input value of 45:

  • servertime: 2983.470703125 ms
  • wasmtime: 8184.0751953125 ms
  • jstime: 15975.77490234375 ms

The results should appear in the browser's dev console.

This was run on a Windows machine running WSL2 (x86_64 GNU/Linux). Specs:

  • Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
  • Speed: 3.40 GHz
  • Cores: 4
  • Logical processors: 8
  • RAM: 16 GB
  • HDD: ST1000LM035-1RK172

Rust version: 1.71 stable. Google chrome: Version 119.0.6045.107 (Official Build) (64-bit)

wasm-opt -O3 didn't improve performance by much. It did, however, reduce the generated wasm size by 30 percent.

Conclusion

  • Performing server-side computations requires a network call and marshalling data to and from the server, which incurs an unnecessary cost when the computation is trivial. In such cases, javascript and wasm offer a close-enough computation cost. A wasm function call can be twice as slow as the javascript one when it has to manipulate the DOM.
  • For intensive computations, the server cost can be considered negligible since native computation remains faster than both wasm and js. Even then, client-side javascript is only twice as slow as wasm!
  • Wasm in the browser, to me, makes sense when you want to target the web using a language other than javascript. Although I'm no fan of js, js browser engines do a good job of optimizing it. However, other languages do bring other advantages to the table, either in language merit or in ecosystem. It also makes sense if you're serving static web pages or SPAs and not handling (or unable to handle) POST requests, or if you want to reduce server computations or avoid network issues.

Forays into the Wasm Component Model


Date: 2025-10-05

The initial title of this post was "The wasm component model isn't real, it can't hurt you", along with an image I planned to add featuring terms like wasi, wit, wac, wkg, jco and a few other confusing wasm-related terms. I eventually reverted it since the component model obviously exists, albeit support for it is still fragmentary: browsers and javascript runtimes don't support it (yet) and it's still a Phase 1 proposal.

However, where wasi is concerned, the component model appears to be officially endorsed. The wasi-sdk (under the official WebAssembly org) will generate a wasm component when building a C/C++ binary targeting wasm32-wasip2. The Rust toolchain's wasm32-wasip2 target (experimental tier 2 since Nov 2024) will similarly generate a wasm component. The wasm-component-ld (wrapper around wasm-ld) is automatically run on the generated wasm core module (unless you opt out by passing the --skip-wit-component linker flag).

I recently had to port several libraries which supported freestanding wasm32, wasip1 and emscripten to also support wasip2: wasmbind and emlite-bind, along with their supporting libraries. wasmbind is a C++ library similar to js_sys and web_sys from Rust-land; it provides bindings to web APIs generated from WebIDL, a space which was lacking in C++-land. emlite-bind is an equivalent Rust library: since js_sys and web_sys don't currently support the wasi or emscripten targets, emlite-bind attempts to fill that space. Carrying out the port without having understood the component model was a painful experience, so this post aims to shed light on what I learned in the process. I'll preface by saying that LLMs didn't help much, I'm guessing because it's all too new and there aren't many resources on the subject.

If you're new to the wasm ecosystem, you might be wondering how a wasm component differs from what came before it: the core module. You can read more about it here. Without going into much detail on the differences, core modules limited data exchange to basic types, namely integers and floats. If you needed to pass a string from a core module to javascript, for example, you would pass the address (an integer) of that string in wasm's linear memory along with its length, unless it was nul-terminated, in which case you would need to account for that. The component model aims to remedy this by allowing the exchange of higher-level types (generic lists, variants, records, enums, strings etc.) without concerning yourself with your wasm binary's memory or __indirect_function_table. These things are hidden from you; in exchange, you get higher-level abstractions. You also no longer have to fiddle with a myriad of linker flags like --import-memory --export-memory --export-table --export-dynamic --export-if-defined=whatever. Instead, you declare your types and interfaces in WIT (the Wasm Interface Type language), in wit files in your wit directory!
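That pointer-plus-length dance can be sketched in plain javascript, with a Uint8Array standing in for the module's linear memory (the function names here are illustrative, not part of any real wasm ABI):

```javascript
// A Uint8Array standing in for the wasm module's linear memory.
const memory = new Uint8Array(1024);

// "Guest" side: write a string's UTF-8 bytes into linear memory,
// handing the host a (ptr, len) pair instead of a string value.
function guestWriteString(s, ptr) {
  const bytes = new TextEncoder().encode(s);
  memory.set(bytes, ptr);
  return { ptr, len: bytes.length };
}

// "Host" side: reconstruct the string from (ptr, len).
function hostReadString(ptr, len) {
  return new TextDecoder("utf-8").decode(memory.subarray(ptr, ptr + len));
}

const { ptr, len } = guestWriteString("Hello, world!", 0);
hostReadString(ptr, len); // "Hello, world!"
```

The component model's canonical ABI performs this kind of lifting and lowering for you, which is exactly the bookkeeping the flags above used to expose.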

The idea is that higher-level wit interfaces will be distributed, devs will program against the APIs they declare, from any programming language which supports the component model (currently 8 languages), without meddling with low-level details or a C ABI. Components should give us better modularity, language interop and portability across languages and runtimes.

Before going into that, let's see how things worked prior to the component model.

Before components

Importing an extern function (from javascript)

Let's say you wanted to console.log a Rust string:

unsafe extern "C" {
    fn console_log_string(s: *const u8, len: usize);
}

fn main() {
    let s = "Hello, world!";
    unsafe {
        console_log_string(s.as_ptr(), s.len());
    }
}

On the javascript side, you would define console_log_string:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<body>
    <script>
        window.onload = async () => {
            const response = await fetch("./target/wasm32-unknown-unknown/debug/blog.wasm");
            const { instance } = await WebAssembly.instantiateStreaming(response, {
                env: {
                    console_log_string: (ptr, len) => {
                        const memory = new Uint8Array(instance.exports.memory.buffer, ptr, len);
                        // Typically you would instantiate your TextDecoder once instead of with every call
                        const string = new TextDecoder('utf-8').decode(memory);
                        console.log(string);
                    }
                }
            });
            instance.exports.main();
        };
    </script>
</body>
</html>

When targeting a javascript runtime you would just do away with the html part and window.onload.

The wasip1 model is practically the same, with a slight difference in that you would pass a wasi_snapshot_preview1 object alongside env. An npm library that I would recommend is @bjorn3/browser_wasi_shim:

    let wasi = new WASI([], [], []);
    const response = await fetch("./target/wasm32-wasip1/debug/blog.wasm");
    const { instance } = await WebAssembly.instantiateStreaming(response, {
        wasi_snapshot_preview1: wasi.wasiImport,
        env: {
            console_log_string: (ptr, len) => {
                const memory = new Uint8Array(instance.exports.memory.buffer, ptr, len);
                const string = new TextDecoder('utf-8').decode(memory);
                console.log(string);
            }
        }
    });

Similarly with emscripten, you would typically define the function in your C/C++ source code using the EM_JS macro:

    EM_JS(void, console_log_string, (const char *ptr, size_t len), {
        const str = UTF8ToString(ptr, len);
        console.log(str);
    });

You can also pass the definition of console_log_string if you build with the shell option -sMODULARIZE:

    import initModule from "./bin/main.mjs";
    window.onload = async () => {
        const mymain = await initModule({
            console_log_string: /* definition goes here */
        });
    };

Exporting a function (to javascript)

Let's say we want to use a native function in our javascript. Before the component model, similarly to how we imported the function, we have to export native functions as extern "C" functions (or in Zig's case, an export fn) for them to be callable from javascript.

#![allow(unused)]
fn main() {
#[unsafe(no_mangle)]
extern "C" fn my_strlen(s: *const u8) -> usize {
    unsafe {
        let mut len = 0;
        while *s.add(len) != 0 {
            len += 1;
        }
        len
    }
}

#[unsafe(no_mangle)]
extern "C" fn greet(s: *const u8, len: usize) -> *const u8 {
    unsafe {
        let greeting = format!("Hello {}\0", 
            std::str::from_utf8(std::slice::from_raw_parts(s, len)).unwrap()
        );
        let ptr = std::alloc::alloc(std::alloc::Layout::from_size_align(greeting.len(), 1).unwrap());
        std::ptr::copy_nonoverlapping(greeting.as_ptr(), ptr, greeting.len());
        ptr
    }
}
}

In emscripten you would use the EMSCRIPTEN_KEEPALIVE macro along with specifying it as an extern "C" function.

Which can be used from js:

    const enc = new TextEncoder();
    const dec = new TextDecoder("utf-8");
    const txt = enc.encode("World!");
    // __rust_alloc & __rust_dealloc are automatically exported in wasm32 core modules compiled by the Rust toolchain
    const ptr = instance.exports.__rust_alloc(txt.length, 1);
    new Uint8Array(instance.exports.memory.buffer).set(txt, ptr);
    const msg = instance.exports.greet(ptr, txt.length);
    let len = instance.exports.my_strlen(msg);
    console.log(dec.decode(new Uint8Array(instance.exports.memory.buffer, msg, len)));
    instance.exports.__rust_dealloc(ptr, txt.length, 1); // free with the same size it was allocated with

With components

Importing an extern function (from javascript)

Now when it comes to wasip2, unless you pass the --skip-wit-component flag to the linker (wasm-component-ld), you end up with a wasm component. So how can we declare our console_log_string function for use within Rust, and how can we define it in javascript? We'll have to do it in WIT. Luckily, for simple cases you can use the wit_bindgen::generate! macro if you add wit-bindgen as a dependency to your project. And since we're building a runnable program, we'll also use the wasip2 crate:

[package]
name = "blog"
version = "0.1.0"
edition = "2024"

[lib]
crate-type = ["cdylib"]

[dependencies]
wit-bindgen = "0.44"
wasip2 = "1"

For C/C++ code, you would have to run wit-bindgen manually or as part of your build, via CMake for example. It will generate a header, a source file and an object file! These should be added to your build. If you're creating a library, the object file should be exposed as a target (in CMake parlance!), otherwise you risk losing it in the final link step. That's actually easier than telling your dependents to manually pass --whole-archive/--no-whole-archive to the linker. Exposing the object as a target can be easily done in CMake:

  # code from the library I'm working on!
  # can be used by consumers using target_link_libraries(myapp PRIVATE emcore::component_type)
  install(FILES ${CMAKE_CURRENT_LIST_DIR}/src/env_component_type.o
          DESTINATION ${CMAKE_INSTALL_LIBDIR})

  add_library(emcore_component_type INTERFACE)
  target_link_libraries(emcore_component_type INTERFACE
    "$<BUILD_INTERFACE:${CMAKE_CURRENT_LIST_DIR}/src/env_component_type.o>"
    "$<INSTALL_INTERFACE:${CMAKE_INSTALL_LIBDIR}/env_component_type.o>"
  )
  add_library(emcore::component_type ALIAS emcore_component_type)
  install(TARGETS emcore_component_type EXPORT emcoreTargets)

Back to Rust, notice in the above Cargo.toml how we changed this from an executable binary to a cdylib. That's because runnable wasip2 components (or those built with -mexec-model=command in C/C++) export a wasi:cli/run interface:

#![allow(unused)]
fn main() {
wit_bindgen::generate!({inline: "
package my:app@0.1.0;

interface logger {
    console-log-string: func(s: string);
}

world app {
    import logger;
}
"});

struct App;

impl wasip2::exports::cli::run::Guest for App {
    fn run() -> Result<(), ()> {
        crate::my::app::logger::console_log_string("Hello, world!");
        Ok(())
    }
}

wasip2::cli::command::export!(App);
}

Since browsers don't support wasip2 as of yet, we can use jco by the bytecodealliance org to generate the necessary core modules and javascript glue code. After installing jco, we run the transpile command:

npm i --save-dev @bytecodealliance/jco
npx jco transpile ./target/wasm32-wasip2/release/blog.wasm -O -o bin/app --instantiation async --no-nodejs-compat --tla-compat --no-typescript

The -O flag tells jco to optimize the generated wasm modules. This might not be necessary for Rust wasm components; however, if you try to generate optimized C/C++ wasm components in Release mode, you'll be hit with an error, because binaryen can't read the wasm component format:

[parse exception: this looks like a wasm component, which Binaryen does not support yet (see https://github.com/WebAssembly/binaryen/issues/6728) (at 0:8)]
Fatal: error parsing wasm (try --debug for more info)

More on that later!
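For the curious, the reason such tools reject components is visible right in the binary preamble: per the component-model binary format (at the time of writing), components keep the `\0asm` magic but bump the version field and set the layer field to 1, whereas core modules encode version 1, layer 0. A sketch that tells the two apart by inspecting bytes 6-7:

```javascript
// Distinguish a core wasm module from a wasm component by its preamble.
// Core module preamble:  00 61 73 6d 01 00 00 00  (version 1, layer 0)
// Component preamble:    00 61 73 6d 0d 00 01 00  (version 0xd, layer 1)
// The version value may change while the proposal is in flux.
function wasmKind(bytes) {
  const magicOk =
    bytes[0] === 0x00 && bytes[1] === 0x61 &&
    bytes[2] === 0x73 && bytes[3] === 0x6d; // "\0asm"
  if (!magicOk || bytes.length < 8) return "not-wasm";
  return bytes[6] === 0x01 ? "component" : "core-module";
}

wasmKind(new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00])); // "core-module"
wasmKind(new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x0d, 0x00, 0x01, 0x00])); // "component"
```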

The transpile step will generate a directory bin/app with the generated core wasm modules and js glue files. We can then instantiate the generated wasm modules and pass our definition of console-log-string as part of the my:app/logger interface:

import { WASIShim } from "@bytecodealliance/preview2-shim/instantiation";
// generated by jco
import { instantiate as initApp } from "../bin/app/blog.js";

async function main() {
  const getAppCore = async (p) => {
    const bytes = await fetch(
      new URL(`../bin/app/${p}`, import.meta.url)
    );
    return WebAssembly.compileStreaming(bytes);
  };

  const wasiShim = new WASIShim({});
  const wasi = wasiShim.getImportObject();

  const app = await initApp(getAppCore, {
    ...wasi,
    "my:app/logger": {
      consoleLogString: (s) => {
        console.log(s);
      },
    },
  });
  app.run.run();
}

await main();

The above code requires a bundler like webpack to resolve the node_modules paths etc.

Exporting a function (to javascript)

With the component model, we would simply define the function and export it:

#![allow(unused)]
fn main() {
wit_bindgen::generate!({inline: "
package my:app@0.1.0;

interface greeter {
    greet: func(s: string) -> string;
}

world app {
    export greeter;
}
"});

struct App;

impl crate::exports::my::app::greeter::Guest for App {
    fn greet(s: String) -> String {
        format!("Hello, {}!", s)
    }
}

export!(App);
}

And we can use it from our javascript:

import { WASIShim } from "@bytecodealliance/preview2-shim/instantiation";
// generated by jco
import { instantiate as initApp } from "../bin/app/blog.js";

async function main() {
  const getAppCore = async (p) => {
    const bytes = await fetch(
      new URL(`../bin/app/${p}`, import.meta.url)
    );
    return WebAssembly.compileStreaming(bytes);
  };

  const wasiShim = new WASIShim({});
  const wasi = wasiShim.getImportObject();

  const app = await initApp(getAppCore, {
    ...wasi,
  });
  console.log(app.greeter.greet("World!"));
}

await main();

Actually, even the instantiation code can be simpler since we don't import anything from javascript, but I went with the manual instantiation code for symmetry with the previous section! For example, if you don't pass --instantiation:

# no --instantiation
npx jco transpile ./target/wasm32-wasip2/release/blog.wasm -O -o bin/app --no-nodejs-compat --tla-compat --no-typescript

You would load using:

import { $init, greeter } from "../bin/app/blog.js";

async function main() {
  await $init;
  console.log(greeter.greet("World!"));
}

await main();

Downsides

I like the idea behind the component model and would like for it to succeed. It would greatly simplify working with wasm. However, it's not all moonlight and roses, especially for those of us more interested in wasm in the browser.

  • Currently the binaries are larger when targeting wasip2, but that's irrespective of whether we're building a wasip2 core module or component.

  • It's unclear whether browsers will support the component model if and when wasi_snapshot_preview2 lands. That means we might still need the jco-transpile step for longer.

  • No centralised wit registry as of yet. wit files need to be vendored in a wit/deps directory or pulled via wkg from non-centralized registries.

  • Things still haven't settled so interfaces are prone to changes.

  • Outside the browser, support is fragmentary and lagging across most wasm runtimes. Currently only wasmtime supports it.

  • Adding wit interfaces might feel like effort duplication.

  • Tools galore! Working with components requires more tools than what you would typically require from your default toolchain:

    • wit-bindgen
    • jco
    • wasm-tools
    • wac
    • wkg
  • Ye olde tools don't work (well) with wasm components: Binaryen, (wasm)objdump, (wasm)strip.

  • Linking components isn't done with your usual linker, you can use wac to compose and plug components.

  • Some of the above mentioned tools are early in development and are not yet stable.

  • Debugging components can be a bit difficult. Lots of trampolines!!

Conclusion

I like the value proposition of the component model, however, things are still cooking. Starting out, you might run into a steep learning curve, mostly because it's different from what you might be used to. WIT isn't difficult to learn. The tooling in my opinion needs to become more streamlined and part of the toolchain. The bigger picture however, is that once the component model is widely supported, it should make wasm programming much easier since you're programming against higher level abstractions, while previously you had to deal with lower level C-like interfaces. The underlying language would be irrelevant to developers consuming those interfaces. WIT will become the new ABI.

FLTK on the Web


Date: 2025-10-09

This post is about getting FLTK to build for wasm and run in browsers! Source code can be found in a fork of FLTK under the emscripten branch.

If you try to git clone stock FLTK and build it using the Emscripten toolchain, you'll be met with a lot of errors. The first issue is FLTK's CMakeLists.txt. The build process conditionally includes the source files required for the target platform. In FLTK's case, when the build doesn't know what it's building for, it assumes Windows. So it will incorporate the Windows driver files, which have win32 calls the compiler doesn't recognize, and the build fails.

Step 1: CMake

Extend the build to identify Emscripten as a platform. Luckily CMake knows Emscripten, so that's as simple as adding:

elseif(WIN32)
# add Windows driver files
elseif(EMSCRIPTEN)
# empty for now
endif()

Some extra changes were also needed in the CMake scripts, since apparently if(UNIX) in CMake returns true on Emscripten. I also had to provide a definition for Window which points to the underlying handle; for example, on Windows it's an HWND, on macOS it's an NSWindow, on X11 it's a Window. I settled on a typedef int Window in FL/platform.H. This makes the configure step and compilation succeed. You can silence linker errors by passing the Emscripten shell option -sERROR_ON_UNDEFINED_SYMBOLS=0. Actually trying to get anything to show in the browser won't work, though, since we now need to plug the windowing, drawing and event-handling code into something browsers understand.

Step 2: Driver stubs

I had considered creating a null driver which could be used as a baseline for building extra drivers. I also suggested it on the fltk mailing list: https://groups.google.com/g/fltkcoredev/c/lXqDv3BUW9Q/m/eAJWBVisCQAJ

The issue, however, was that I found it necessary to modify FLTK sources outside of the drivers to support Emscripten. So even though I still think it would eventually be useful for FLTK to have a null driver allowing third-party developers to create external drivers/backends, it would require more work than what I had set out to do.

So I shifted my focus to adding an Emscripten driver for FLTK. There are essential classes, instantiated by FLTK, which handle windowing and graphics:

Fl_Screen_Driver *Fl_Screen_Driver::newScreenDriver() {
  return new Fl_Emscripten_Screen_Driver();
}

Fl_System_Driver *Fl_System_Driver::newSystemDriver() {
  return new Fl_Emscripten_System_Driver();
}

Fl_Image_Surface_Driver *Fl_Image_Surface_Driver::newImageSurfaceDriver(int w, int h, int highres,
                                                                        Fl_Offscreen off) {
  return new Fl_Emscripten_Image_Surface_Driver(w, h, highres, off);
}

Fl_Graphics_Driver *Fl_Graphics_Driver::newMainGraphicsDriver() {
  return new Fl_Emscripten_Graphics_Driver();
}

Fl_Window_Driver *Fl_Window_Driver::newWindowDriver(Fl_Window *w) {
  return new Fl_Emscripten_Window_Driver(w);
}

Fl_Copy_Surface_Driver *Fl_Copy_Surface_Driver::newCopySurfaceDriver(int w, int h) {
  return new Fl_Emscripten_Copy_Surface_Driver(w, h);
}

Basically you have to provide definitions for these factory methods (declared in the respective headers). You do that by subclassing each driver and returning an instance of the subclass.

The classes also provide method declarations which need to be defined in the driver sources. Some are already defined in core FLTK, so you can rely on the default implementation or override them when necessary. So there were a lot of stubs in the beginning.

This step will actually build and link correctly, even though it still won't show anything in the browser.

Step 3: Actually deciding what's a Window

I knew from the beginning that I was going to use the canvas for drawing. But the question of "what's a window" in the browser had me stumped for some time. I decided that an FLTK window should map to an HTMLDivElement. It should contain decorations (borders with a title bar and at least a close button). The decorations would be another div, and the client area would be the canvas. The window's handle (the int from before) would be incorporated into the div element's id. In essence, my Fl_Emscripten_Window_Driver::makeWindow() method would contain this abomination:

  EM_ASM(
      {
        let body = document.getElementsByTagName("body")[0];
        let div = document.createElement("DIV");
        div.id = "fltk_div" + $0;
        div.tabIndex = "-1";
        div.addEventListener("contextmenu", (e) => e.preventDefault());
        div.style.position = "absolute";
        div.style.left = $2 + "px";
        div.style.top = $1 ? ($3 - 30) + "px" : $3 + "px";
        div.style.zIndex = 1;
        div.style.backgroundColor = "#f1f1f1";
        div.style.borderRight = "1px solid #555";
        div.style.borderBottom = "1px solid #555";
        div.style.textAlign = "center";
        body.appendChild(div);
        let decor = document.createElement("DIV");
        decor.id = "fltk_decor" + $0;
        decor.style.height = "16px";
        decor.style.font = "14px Arial";
        decor.style.padding = "6px";
        decor.style.cursor = "move";
        decor.style.zIndex = 2;
        decor.style.backgroundColor = "#2196F3";
        decor.style.color = "#fff";
        decor.style.cursor = "pointer";
        div.appendChild(decor);
        let header = document.createElement("DIV");
        header.textContent = UTF8ToString($6);
        header.id = "fltk_decor_header" + $0;
        header.style.font = "14px Arial";
        decor.appendChild(header);
        let close = document.createElement("BUTTON");
        close.id = "closewin";
        close.textContent = "X";
        close.style.font = "bold 14px Arial";
        close.style.position = "absolute";
        close.style.top = "1%";
        close.style.right = "1px";
        close.style.backgroundColor = "#2196F3";
        close.style.border = "none";
        close.style.color = "#fff";
        close.addEventListener("click", () => div.hidden = true);
        decor.appendChild(close);
        let canvas = document.createElement("CANVAS");
        canvas.id = "fltk_canvas" + $0;
        canvas.setAttribute("data-raw-handle", $0.toString());
        canvas.tabIndex = "-1";
        canvas.width = $4;
        canvas.height = $5;
        div.appendChild(canvas);
        canvas.addEventListener("click", () => canvas.focus());
        decor.addEventListener("mousedown", () => canvas.focus());
        div.addEventListener(
            "focusin", () => { canvas.focus(); div.style.zIndex = 1; });
        div.addEventListener(
            "focusout", () => { canvas.blur(); div.style.zIndex = 0; });
        if ($1 === 0) decor.hidden = true;
        // https://www.w3schools.com/HOWTO/howto_js_draggable.asp
        function dragElement(elmnt) {
          var pos1 = 0;
          var pos2 = 0;
          var pos3 = 0;
          var pos4 = 0;
          if (document.getElementById("fltk_decor" + $0)) {
            document.getElementById("fltk_decor" + $0).onmousedown = dragMouseDown;
          } else {
            elmnt.onmousedown = dragMouseDown;
          }

          function dragMouseDown(e) {
            e = e || window.event;
            e.preventDefault();
            pos3 = e.clientX;
            pos4 = e.clientY;
            document.onmouseup = closeDragElement;
            document.onmousemove = elementDrag;
          }

          function elementDrag(e) {
            e = e || window.event;
            e.preventDefault();
            pos1 = pos3 - e.clientX;
            pos2 = pos4 - e.clientY;
            pos3 = e.clientX;
            pos4 = e.clientY;
            elmnt.style.left = (elmnt.offsetLeft - pos1) + "px";
            elmnt.style.top = (elmnt.offsetTop - pos2) + "px";
            _fltk_em_track_div($0, elmnt.offsetLeft|0, elmnt.offsetTop|0);
          }

          function closeDragElement() {
            document.onmouseup = null;
            document.onmousemove = null;
          }
        }

        dragElement(document.getElementById("fltk_div" + $0));
      },
      ID, pWindow->border(), pWindow->x(), pWindow->y(), 
      pWindow->w(), pWindow->h(), pWindow->label() ? pWindow->label(): "");

The above code should:

  • Allow setting a title on the title bar.
  • Allow dragging the div across the screen by its title bar, like you would any window on your desktop.
  • Allow Fl_Window::set_border(false) to hide the title bar.

There's a downside to this approach, however: FLTK allows embedding windows inside other windows, while in the browser you can't embed another canvas or div inside a canvas! So that choice is still questionable!

Plugging in the rest of the Fl_Window_Driver methods and wiring them to emscripten methods was quite easy. Mapping FLTK events to browser events, FLTK cursor types to browser cursor types, and FLTK fonts to browser fonts was some busy work but not that difficult. Events required forwarding via Fl::handle to the window/div in which they occurred.

An intrusive change to the FLTK sources outside of driver code:

int Fl::run() {
#ifndef __EMSCRIPTEN__
  while (Fl_X::first) wait(FOREVER);
#else
  emscripten_set_main_loop([]() { Fl::wait(); }, 0, true);
#endif
  return 0;
}

Browser environments operate on an event-driven model. An infinite while loop, which is common in native applications, would block the browser's main thread, causing the page to become unresponsive.
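The difference can be illustrated with a toy scheduler: instead of blocking in a `while`, each `Fl::wait()`-style iteration re-queues itself and yields, which is what `emscripten_set_main_loop` arranges against the browser's real event loop. Everything below is an illustrative stand-in, not emscripten's actual machinery:

```javascript
// A toy task queue standing in for the browser's event loop.
const taskQueue = [];
const schedule = (fn) => taskQueue.push(fn);

let iterations = 0;
let windowsOpen = true;

// One cooperative iteration: do a slice of work (the Fl::wait() stand-in),
// then yield by re-scheduling itself, so the "event loop" regains control.
function mainLoopIteration() {
  iterations += 1;
  if (iterations >= 3) windowsOpen = false; // e.g. the last window closed
  if (windowsOpen) schedule(mainLoopIteration);
}

schedule(mainLoopIteration);
// Drain the queue, as the browser would between frames.
while (taskQueue.length > 0) taskQueue.shift()();
// iterations === 3, and control returned to the "browser" between each one
```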

Step 4: What's a screen

Browsers offer a globalThis.screen, and by default a browser page has only one screen, so that might be a no-brainer! However, FLTK needs to know the confines it's working in: some panels, the taskbar and the browser menu are actually part of the screen. I decided to go with availWidth and availHeight:

using namespace emscripten;
int Fl_Emscripten_Screen_Driver::w() {
  val screen = val::global("screen");
  return screen["availWidth"].as<int>();
}
int Fl_Emscripten_Screen_Driver::h() {
  val screen = val::global("screen");
  return screen["availHeight"].as<int>();
}

Other screen driver methods on the FLTK side included getting mouse coordinates, handling text composition (since you need to translate key presses inside text-accepting widgets into actual text) and handling special keys pressed together with character keys. The screen driver also covers cut/copy/paste, which required me to venture into new territory; I had never had to deal with the browser's clipboard before. Getting the clipboard text, for example, requires:

std::string clipText =
          emscripten::val::global("navigator")["clipboard"].call<val>("readText").await().as<std::string>();

Writing to the clipboard is also done via navigator.clipboard.writeText.

Similarly getting an image from the clipboard isn't trivial either:

EM_ASYNC_JS(EM_VAL, get_clipboard_image, (), {
  const itemList = await navigator.clipboard.read();
  const item = itemList.find(item => item.types.some(type => type.startsWith('image/')));
  if (item) {
    const imageType = item.types.find(type => type.startsWith('image/'));
    const imageBlob = await item.getType(imageType);
    const imageBitmap = await createImageBitmap(imageBlob);
    const canvas = new OffscreenCanvas(imageBitmap.width, imageBitmap.height);
    const context = canvas.getContext('2d');
    context.drawImage(imageBitmap, 0, 0);
    const imageData = context.getImageData(0, 0, canvas.width, canvas.height);
    return imageData;
  } else {
    return null;
  }
});

Step 5: Graphics

FLTK on Linux/BSD uses Cairo for drawing (it's the default on Wayland, and requires FLTK_GRAPHICS_CAIRO on X11). The browser's canvas API is similar enough to Cairo's graphics API, which was very helpful in translating graphics calls to canvas calls. For example, compare the following:

void Fl_Cairo_Graphics_Driver::loop(int x0, int y0, int x1, int y1, int x2, int y2) {
  cairo_save(cairo_);
  cairo_new_path(cairo_);
  cairo_move_to(cairo_, x0, y0);
  cairo_line_to(cairo_, x1, y1);
  cairo_line_to(cairo_, x2, y2);
  cairo_close_path(cairo_);
  cairo_stroke(cairo_);
  cairo_restore(cairo_);
  surface_needs_commit();
}

to:

void Fl_Emscripten_Graphics_Driver::loop(int x0, int y0, int x1, int y1, int x2, int y2) {
  EM_ASM(
      {
        let ctx = Emval.toValue($0);
        ctx.save();
        ctx.beginPath();
        ctx.moveTo($1, $2);
        ctx.lineTo($3, $4);
        ctx.lineTo($5, $6);
        ctx.closePath();
        ctx.stroke();
        ctx.restore();
      },
      ctxt, x0, y0, x1, y1, x2, y2);
}

Drawing images was tricky since FLTK supports L8, LA8, RGB8 and RGBA8 formats, so FLTK images needed conversion to RGBA8 before being sent to the canvas for drawing.
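As an illustration, expanding L8 (8-bit luminance) data to the RGBA8 layout the canvas expects could look like the following sketch (the function name is mine, not FLTK's):

```javascript
// Expand an 8-bit luminance (L8) buffer into RGBA8: replicate the gray
// value into R, G and B, and set alpha to fully opaque.
function l8ToRgba8(src) {
  const dst = new Uint8ClampedArray(src.length * 4);
  for (let i = 0; i < src.length; i++) {
    dst[i * 4 + 0] = src[i]; // R
    dst[i * 4 + 1] = src[i]; // G
    dst[i * 4 + 2] = src[i]; // B
    dst[i * 4 + 3] = 255;    // A
  }
  return dst;
}

l8ToRgba8(new Uint8Array([0, 128])); // [0, 0, 0, 255, 128, 128, 128, 255]
```

LA8 would be handled the same way, except the source's second channel becomes the alpha instead of 255. A Uint8ClampedArray is used because that's what ImageData expects.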

Building this would finally show a window with whatever widgets you put in it. The implementation was buggy due to some mismatches in event handling, but those were easily worked out since there was finally something I could actually test and see!

As for offscreen support, FLTK supports offscreen drawing via Fl_Offscreen and the image surface driver, and browsers support an OffscreenCanvas. This requires enabling the Emscripten flag -sOFFSCREENCANVAS_SUPPORT, with which it should work out of the box.

Step 6: Reworking events

It's not enough to translate a browser event to an FLTK event:

static int match_mouse_event(int eventType) {
  switch (eventType) {
    case EMSCRIPTEN_EVENT_MOUSEDOWN:
      return FL_PUSH;
    case EMSCRIPTEN_EVENT_MOUSEUP:
      return FL_RELEASE;
    case EMSCRIPTEN_EVENT_MOUSEMOVE:
      return FL_MOVE;
    case EMSCRIPTEN_EVENT_MOUSEENTER:
      return FL_ENTER;
    case EMSCRIPTEN_EVENT_MOUSELEAVE:
      return FL_LEAVE;
    case EMSCRIPTEN_EVENT_DBLCLICK:
      return FL_PUSH;
    case EMSCRIPTEN_EVENT_CLICK:
      return FL_PUSH;
    default:
      return 0;
  }
}

You would also need to handle state:

int flev = match_mouse_event(eventType);
if (flev == FL_PUSH) {
  if (eventType == EMSCRIPTEN_EVENT_DBLCLICK)
    Fl::e_clicks = 1;
  else
    Fl::e_clicks = 0;
  Fl::e_is_click = 1;
  px = Fl::e_x_root;
  py = Fl::e_y_root;
  if (event->button == 0)
    state |= FL_BUTTON1;
  if (event->button == 1)
    state |= FL_BUTTON2;
  if (event->button == 2)
    state |= FL_BUTTON3;
  Fl::e_keysym = FL_Button + event->button + 1;
// same for other mouse events

Same for key presses! It also turns out that you have to unregister events from closed windows (divs)! This step was probably the least interesting to work on and took a disproportionately long time.
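The unregistering itself is mostly bookkeeping: record every listener you attach, keyed by the window's handle, so that closing the window can tear them all down. A minimal sketch (hypothetical helpers, not the actual driver code):

```javascript
// Listeners attached per FLTK window handle, so they can all be removed
// when that window's div is closed.
const listeners = new Map();

function registerEvent(winId, target, type, fn) {
  target.addEventListener(type, fn);
  if (!listeners.has(winId)) listeners.set(winId, []);
  listeners.get(winId).push({ target, type, fn });
}

function unregisterEvents(winId) {
  // Detach every listener recorded for this window, then forget them.
  for (const { target, type, fn } of listeners.get(winId) ?? []) {
    target.removeEventListener(type, fn);
  }
  listeners.delete(winId);
}
```

In a driver, something like unregisterEvents would run when a window is hidden or destroyed, before its div is removed from the DOM.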

Step 7: Font support

By default, FLTK has a set of 15 fonts which it supports automatically. Luckily these can be mapped to web-safe fonts which are available in all browsers. Some browsers (Chrome and derivatives) additionally provide a queryLocalFonts method which allows you to use fonts from the host system:

  bool has_query_fonts_api = EM_ASM_INT({
    let has_api = false;
    if ("queryLocalFonts" in window) {
      navigator.permissions.query({ name: "local-fonts" }).then((result) => {
      if (result.state === "granted" || result.state === "prompt") {
        has_api = true;
      }});
    }
    return has_api;
  });
  // clang-format on
  if (!has_query_fonts_api) {
    Fl::set_font((Fl_Font)(FL_FREE_FONT + 1), name);
    built_in_table.push_back({name});
    return FL_FREE_FONT + 1;
  } else {
    val window = val::global("window");
    val availablefonts = window.call<val>("queryLocalFonts").await();
    std::vector<val> vec = vecFromJSArray<val>(availablefonts);
    int count = 0;
    for (const val &font : vec) {
      std::string familyname0 = font["family"].as<std::string>();
      int lfont = familyname0.size() + 2;
      const char *familyname = familyname0.c_str();
      char *fname = new char[lfont];
      snprintf(fname, lfont, " %s", familyname);
      char *regular = strdup(fname);
      Fl::set_font((Fl_Font)(count++ + FL_FREE_FONT), regular);
      built_in_table.push_back({regular});

      snprintf(fname, lfont, "B%s", familyname);
      char *bold = strdup(fname);
      Fl::set_font((Fl_Font)(count++ + FL_FREE_FONT), bold);
      built_in_table.push_back({bold});

      snprintf(fname, lfont, "I%s", familyname);
      char *italic = strdup(fname);
      Fl::set_font((Fl_Font)(count++ + FL_FREE_FONT), italic);
      built_in_table.push_back({italic});

      snprintf(fname, lfont, "P%s", familyname);
      char *bi = strdup(fname);
      Fl::set_font((Fl_Font)(count++ + FL_FREE_FONT), bi);
      // The returned fonts are already sorted.
      built_in_table.push_back({bi});
      delete[] fname;
    }
    return FL_FREE_FONT + count;
  }

Step 8: Supporting file dialogs

FLTK provides an Fl_Native_File_Chooser which wraps the native file picker on all of its backends. Browsers provide a File System API, and some additionally support showOpenFilePicker, showSaveFilePicker and showDirectoryPicker; that's how some sites let you open a file dialog and upload a file, for example. Unfortunately, this has limited availability: Firefox, for example, doesn't support showing dialogs via the above-mentioned API. You can use an HTMLInputElement with its type set to file (equivalent to <input type="file" />), but after experimenting with that I found it quite limited: no writable handles, no directory traversal, among other minor issues! Where it's supported, spawning a file dialog from FLTK in the browser should work:

// This translates the chooser type to a browser picker. We have 3 main types:
// 1- showOpenFilePicker
// 2- showSaveFilePicker
// 3- showDirectoryPicker
// clang-format off
EM_ASYNC_JS(EM_VAL, showChooser, (int type, const char *filter, const char *dir, const char *preset), {
  if (!window.showOpenFilePicker) {
    return null;
  }
  let multiple = false;
  let files = false;
  let save = false;
  if (type === 0 || type === 2 || type === 4) {
    files = true;
  }
  if (type === 2 || type === 3) {
    multiple = true;
  }
  if (type > 3) {
    save = true;
  }
  let func;
  if (files) {
    if (save) {
      func = window.showSaveFilePicker;
    } else {
      func = window.showOpenFilePicker;
    }
  } else {
    func = window.showDirectoryPicker;
  }
  let dir1 = dir ? UTF8ToString(dir) : 'desktop';
  let filt = UTF8ToString(filter).split(' ');
  // I use application/x-abiword since I don't think it's widely used as a mime type!
  const openPickerOpts = {
    types: [
      {
        accept: {
          "application/x-abiword": filt,
        },
      },
    ],
    startIn: dir1,
    excludeAcceptAllOption: true,
    multiple: multiple,
  };
  const savePickerOpts = {
    types: [
      {
        accept: {
          "application/x-abiword": filt,
        },
      },
    ],
    suggestedName: UTF8ToString(preset),
    startIn: dir1,
    excludeAcceptAllOption: true,
  };
  const directoryOpts = {
    mode: "readwrite",
    startIn: dir1,
  };
  if (files) {
    return Emval.toHandle(func(save ? savePickerOpts : openPickerOpts));
  } else {
    return Emval.toHandle(func(directoryOpts));
  }
});
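The type-to-picker branching at the top of showChooser boils down to three flags. Here is the same decision logic as a standalone C++ sketch; the integer values mirror what the JS above assumes, which I believe corresponds to the order of Fl_Native_File_Chooser's BROWSE_* constants (file, directory, multi-file, multi-directory, save-file, save-directory):

```cpp
struct PickerChoice {
  bool files;    // file picker (true) vs. directory picker (false)
  bool multiple; // allow multi-selection
  bool save;     // save dialog vs. open dialog
};

// Mirrors the branching in showChooser above: types 0/2/4 pick files,
// types 2/3 allow multiple selection, and anything above 3 is a save dialog.
PickerChoice classify_chooser(int type) {
  PickerChoice c{false, false, false};
  if (type == 0 || type == 2 || type == 4) c.files = true;
  if (type == 2 || type == 3) c.multiple = true;
  if (type > 3) c.save = true;
  return c;
}
```

With that mapping, type 4 (save file) ends up calling showSaveFilePicker, type 1 (single directory) calls showDirectoryPicker, and so on.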

Accessing the selected files is done via:

  • fl_read_to_string
  • fl_read_to_binary
  • fl_write_to_file

That's because the standard C/C++ file functions only work with Emscripten's virtual filesystem. The above functions use browser APIs to carry out reads and writes:

  // writing to a file using the browsers' file system api
  val data1 = val(typed_memory_view(len, data));
  val file = filehandles[idx];
  val writable = file.call<val>("createWritable").await();
  writable.call<val>("write", data1).await();
  writable.call<val>("close").await();

Step 9: Working around limitations

Mitigating the reentrancy issue

As mentioned previously, since browsers are event-driven, you should avoid long while loops, as these would block your page. FLTK's menu windows loop until something is selected. The blocking can be (partially) mitigated by using emscripten_sleep in the Fl_Emscripten_System_Driver::wait method and enabling Asyncify support in Emscripten. Full mitigation would require removing the while loops from the menu code, but that would be too intrusive!

Virtual keyboard support for text accepting widgets on mobile

When you use a mobile browser and tap into an HTMLInputElement (input) or a textarea element, you automatically get a virtual keyboard that you can type into. However, an Fl_Input or Fl_Text_Editor isn't one of those elements; they're drawings on the canvas. While on Android you can manually show the keyboard and that works, on iOS it's just not possible. Some browsers provide a Virtual Keyboard API, alas Safari isn't one of them, nor is the WebView used by Chrome and Firefox on iOS. To actually support this, I would have to modify FLTK to back Fl_Input with an input element and Fl_Text_Editor with a textarea element. The change was too intrusive, so I dropped it. If iOS browsers eventually support the Virtual Keyboard API, I might implement this!

Step 10: Getting emscripten support for fltk-rs

All the work was done in a fork of FLTK under the emscripten branch. Adding support in fltk-rs requires cloning the fork and building it as part of the fltk-rs build. By default we build in single-threaded mode to avoid the extra policy permission required for SharedArrayBuffer. The only changes required, apart from the build system, are exposing file reads/writes for the File System API, due to the same limitation of the standard file operations.

Conclusion

Overall, I wouldn't recommend building a web UI that relies heavily on canvas-based rendering for its visuals and widgets. The limitations described in Step 9 are part of the reason. This approach also sacrifices many of the benefits offered by native HTML, CSS, and JavaScript (accessibility, and the sheer number of man-years put into improving things). Development can be cumbersome as well, as even small changes often require a full rebuild. Binary size is another concern: while a simple interface built with standard web technologies might only take a few kilobytes, FLTK wasm builds compiled with Emscripten can easily reach several hundred kilobytes, even with release builds, stripping and LTO. Finally, I can say I learned a lot in the process, and I have two nice demos to show:

  • C++ demo https://moalyousef.github.io/fltk_emscripten/ (source code)
  • Rust demo https://moalyousef.github.io/fltk-rs-emscripten-example/ (source code)

Thank you for reading!