Home

This is my personal blog. It's powered by mdbook.

The source code can be found here.

Cargo as a tool to distribute C/C++ executables

Date: 2021-05-04

If you didn't know already, Cargo, Rust's package manager, can be used to install executable binaries via cargo-install. The downside is that they have to be Rust binaries. That makes sense, since Cargo builds binaries from source, and naturally it knows how to build Rust projects!

However, this functionality is too good to be exclusive to Rust projects. We'll see how we can also use cargo-install to install C/C++ executables.

In short, this works by redefining the C/C++ executable's main() function.

The technique described here is useful when an application lacks an official package, when the official package is outdated, or when you don't wish to conflict with an already-installed official package.

A similar technique was used to wrap Fluid (FLTK's gui designer). With C++, however, the redefined main function also needs to be preceded by extern "C" to avoid name mangling.

Back to the topic. We'll start with a simple project. First, we'll create our Rust project:

$ cargo new cbin
$ cd cbin

We'll create a C binary which takes command-line arguments:

$ touch src/main.c
$ cat > src/main.c << EOF
> #include <stdio.h>
> int main(int argc, char **argv) {
>   if (argc > 1)
>     printf("Hello %s\n", argv[1]);
>   return 0;
> }
> EOF

To build the C binary, we'll need a build script, and we can use the cc crate along with it to make our lives a bit easier!

# Cargo.toml
[build-dependencies]
cc = "1.0"

Our build.rs file (which we create at the root of our project) would look something like:

// build.rs
fn main() {
    cc::Build::new()
        .file("src/main.c")
        .define("main", "cbin_main") // notice our define here!
        .compile("cbin");
}

Check that everything builds by running cargo build. You'll notice Cargo built a static library, libcbin.a (or cbin.lib if using the MSVC toolchain), in the OUT_DIR.

Let's now wrap the cbin_main function, which we redefined in our build script. Our src/main.rs should look like:

// src/main.rs
use std::env;
use std::ffi::CString;
use std::os::raw::*;

extern "C" {
    pub fn cbin_main(argc: c_int, argv: *mut *mut c_char) -> c_int;
}

fn main() {
    let mut args: Vec<_> = env::args()
        .map(|s| CString::new(s).unwrap().into_raw())
        .collect();
    let argc = args.len() as c_int;
    // C code may read argv[argc], which is expected to be NULL
    args.push(std::ptr::null_mut());
    let _ret = unsafe { cbin_main(argc, args.as_mut_ptr()) };
}

This basically takes all command-line args passed to the Rust binary and passes them to the C binary (that we turned into a library!).

You can check by running:

$ cargo run -- world

Now you can run cargo install --path . to install your C binary wrapper. Or you can publish your crate to crates.io, and others will be able to install it with cargo install.

Interfacing with the Objective-C runtime

Date: 2022-07-18

I recently released a proof-of-concept library wrapping several native widgets on Android and iOS. It's written in C++, and I've also released Rust bindings to it. In my post on the Rust subreddit announcing the release, a fellow redditor validly remarked: "I'm a little surprised you wrapped a floui-rs around the Floui C++ project rather than just writing rust and calling into objc or the jni". I wasn't satisfied with my succinct answer, and I thought a Reddit reply wouldn't provide enough context for many reading it, so I decided to write this post. Just a note before diving in: floui's iOS implementation is in Objective-C++ and requires a #define FLOUI_IMPL macro in at least one Objective-C++ source file; the rest of the gui code can be written in .cpp or .mm files, since the interface is in C++. Regarding the JNI part, it's equally painful to write in C++ or Rust, so there's no point in discussing that.

Most Apple frameworks expose an Objective-C api, except for some which expose a C++ api (DriverKit) or a Swift api (StoreKit 2). That makes Objective-C Apple's systems language par excellence, and other languages need to be able to interface with it to reach any functionality provided by Apple in its frameworks. Interfacing with Objective-C isn't straightforward, and apart from Swift (and only on Apple platforms; it can't interface with GNUstep, for example), no other language can directly interface with it. Luckily, the objc runtime offers C functions which allow other languages to interface with Objective-C frameworks. And any language that can call C can, via the objc runtime, interface with Objective-C.

In this post, I'll show how this can be done using C++ and Rust. The C++ version can be modified to C by just replacing auto with a concrete type.

As an example, we'll be creating an iOS app purely in C++, and then in Rust. I say pure in that there's no visible Objective-C code as far as the developer is concerned; however, this still calls into the UIKit framework, which is an ObjC framework. The app will be the equivalent of the following Objective-C app:

// main.m or main.mm
#import <UIKit/UIKit.h>

@interface AppDelegate : UIResponder <UIApplicationDelegate>
@property(strong, nonatomic) UIWindow *window;
@end

@interface ViewController : UIViewController
@end

@implementation AppDelegate
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    CGRect frame = [[UIScreen mainScreen] bounds];
    self.window = [[UIWindow alloc] initWithFrame:frame];
    [self.window setRootViewController:[ViewController new]];
    self.window.backgroundColor = UIColor.whiteColor;
    [self.window makeKeyAndVisible];
    return YES;
}
@end

@implementation ViewController
- (void)clicked {
    NSLog(@"clicked");
}
- (void)viewDidLoad {
    [super viewDidLoad];
    UIButton *btn = [UIButton buttonWithType:UIButtonTypeCustom];
    btn.frame = CGRectMake(100, 100, 80, 30);
    [btn setTitle:@"Click" forState:UIControlStateNormal];
    [btn setTitleColor:UIColor.blueColor forState:UIControlStateNormal];
    [btn addTarget:self
                  action:@selector(clicked)
        forControlEvents:UIControlEventPrimaryActionTriggered];
    [self.view addSubview:btn];
}
@end

int main(int argc, char *argv[]) {
    NSString *appDelegateClassName;
    @autoreleasepool {
        appDelegateClassName = NSStringFromClass([AppDelegate class]);
    }
    return UIApplicationMain(argc, argv, nil, appDelegateClassName);
}

Simple enough: it creates a view with a button which prints to the console when clicked. Note that Objective-C can seamlessly incorporate C++ code into what's called Objective-C++; this only requires changing the file extension from .m to .mm. That already speaks volumes about the flexibility and extensibility of Objective-C. Generally, I find Objective-C++ less verbose than Objective-C, since it can benefit from modern C++ features like type inference:

UISomeBespokenlyLongTypeName *t = [UISomeBespokenlyLongTypeName new];
// becomes
auto t = [UISomeBespokenlyLongTypeName new];

Modern C++ also offers other niceties like lambdas, namespaces, metaprogramming capabilities, optional, variant, containers and algorithms. Objective-C itself is considered verbose, especially when compared to Swift. However, calling into the ObjC runtime from other languages, as we'll see, is even more verbose.

To get things working in pure C++, we need to include several headers: those of the Objective-C runtime, CoreFoundation and CoreGraphics. The Objective-C runtime headers provide several functions, like objc_msgSend and others, which allow us to create Objective-C classes, add/override methods etc.

// main.cpp
#include <CoreFoundation/CoreFoundation.h>
#include <CoreGraphics/CoreGraphics.h>
#define OBJC_OLD_DISPATCH_PROTOTYPES 1
#include <objc/objc.h>
#include <objc/runtime.h>
#include <objc/message.h>

extern "C" int UIApplicationMain(int, ...);

extern "C" void NSLog(objc_object *, ...);

BOOL didFinishLaunching(objc_object *self, SEL _cmd, void *application, void *options) {
    auto mainScreen = objc_msgSend((id)objc_getClass("UIScreen"), sel_getUid("mainScreen"));
    CGRect (*boundsFn)(id receiver, SEL operation);
    boundsFn = (CGRect(*)(id, SEL))objc_msgSend_stret;
    CGRect frame = boundsFn(mainScreen, sel_getUid("bounds"));
    auto win = objc_msgSend((id)objc_getClass("UIWindow"), sel_getUid("alloc"));
    win = objc_msgSend(win, sel_getUid("initWithFrame:"), frame);
    auto viewController = objc_msgSend((id)objc_getClass("ViewController"), sel_getUid("new"));
    objc_msgSend(win, sel_getUid("setRootViewController:"), viewController);
    objc_msgSend(win, sel_getUid("makeKeyAndVisible"));
    auto white = objc_msgSend((id)objc_getClass("UIColor"), sel_getUid("whiteColor"));
    objc_msgSend(win, sel_getUid("setBackgroundColor:"), white);
    object_setIvar(self, class_getInstanceVariable(objc_getClass("AppDelegate"), "window"), win);
    return YES;
}

void didLoad(objc_object *self, SEL _cmd) {
    objc_super _super = {
         .receiver = self,
         .super_class = objc_getClass("UIViewController"),
    };
    objc_msgSendSuper(&_super, sel_getUid("viewDidLoad"));
    auto btn = objc_msgSend((id)objc_getClass("UIButton"), sel_getUid("buttonWithType:"), 0);
    objc_msgSend(btn, sel_getUid("setFrame:"), CGRectMake(100, 100, 80, 30));
    auto title = objc_msgSend((id)objc_getClass("NSString"), sel_getUid("stringWithUTF8String:"), "Click");
    objc_msgSend(btn, sel_getUid("setTitle:forState:"), title, 0);
    auto blue = objc_msgSend((id)objc_getClass("UIColor"), sel_getUid("blueColor"));
    objc_msgSend(btn, sel_getUid("setTitleColor:forState:"), blue, 0);
    objc_msgSend(btn, sel_getUid("addTarget:action:forControlEvents:"), self, sel_getUid("clicked"), 1 << 13);
    auto view = objc_msgSend(self, sel_getUid("view"));
    objc_msgSend(view, sel_getUid("addSubview:"), btn);
}

void clicked(objc_object *self, SEL _cmd) {
    auto msg = objc_msgSend((id)objc_getClass("NSString"), sel_getUid("stringWithUTF8String:"), "clicked");
    NSLog(msg);
}

int main(int argc, char *argv[]) {
    auto AppDelegateClass = objc_allocateClassPair(objc_getClass("UIResponder"), "AppDelegate", 0);
    class_addIvar(AppDelegateClass, "window", sizeof(id), 0, "@");
    class_addMethod(AppDelegateClass, sel_getUid("application:didFinishLaunchingWithOptions:"), (IMP) didFinishLaunching, "i@:@@");
    objc_registerClassPair(AppDelegateClass);

    auto ViewControllerClass = objc_allocateClassPair(objc_getClass("UIViewController"), "ViewController", 0);
    class_addMethod(ViewControllerClass, sel_getUid("viewDidLoad"), (IMP) didLoad, "v@:");
    class_addMethod(ViewControllerClass, sel_getUid("clicked"), (IMP) clicked, "v@:");
    objc_registerClassPair(ViewControllerClass);
    
    auto name = objc_msgSend((id)objc_getClass("NSString"), sel_getUid("stringWithUTF8String:"), "AppDelegate");
    id autoreleasePool = objc_msgSend((id)objc_getClass("NSAutoreleasePool"), sel_registerName("new"));
    UIApplicationMain(argc, argv, nil, name);
    objc_msgSend(autoreleasePool, sel_registerName("drain"));
}

You can create a new Objective-C project, delete all the source files and replace them with this single C++ file, and Xcode will happily build it and run the binary on your simulator. If you'd like to compile this from the command line:

clang++ -std=c++11 -arch x86_64 -isysroot $(xcrun --sdk iphonesimulator --show-sdk-path) main.cpp -fobjc-arc -lobjc -framework UIKit
# You can install it directly on an iOS simulator if you prepare an appropriate info.plist
xcrun simctl install booted path/to/bundle.app # assumes you have a simulator booted

You can also use CMake (with a toolchain file) with certain bundle info predefined in your CMakeLists.txt:

cmake_minimum_required(VERSION 3.14)
project(app)

set(CMAKE_EXPORT_COMPILE_COMMANDS ON)

set(MACOSX_BUNDLE_BUNDLE_NAME "Minimal Uikit Application")
set(MACOSX_BUNDLE_BUNDLE_VERSION 0.1.0)
set(MACOSX_BUNDLE_COPYRIGHT "Copyright © 2022 moalyousef.github.io. All rights reserved.")
set(MACOSX_BUNDLE_GUI_IDENTIFIER com.neurosrg.cpure)
set(MACOSX_BUNDLE_ICON_FILE app)
set(MACOSX_BUNDLE_LONG_VERSION_STRING 0.1.0)
set(MACOSX_BUNDLE_SHORT_VERSION_STRING 0.1)

add_executable(main main.cpp)
target_compile_features(main PUBLIC cxx_std_11)
target_link_libraries(main PUBLIC "-framework UIKit" "-framework CoreFoundation" "-framework Foundation" objc)

Then:

cmake -Bbin -GNinja -DPLATFORM=OS64COMBINED -DCMAKE_TOOLCHAIN_FILE=ios.toolchain.cmake # just to get the compile commands for clangd auto-completion on vscode
rm -rf bin
cmake -Bbin -GXcode -DPLATFORM=OS64COMBINED -DCMAKE_TOOLCHAIN_FILE=ios.toolchain.cmake 
cd bin
xcodebuild build -configuration Debug -sdk iphonesimulator -arch x86_64 CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED=NO
xcrun simctl install booted Debug-iphonesimulator/main.app

Returning to our C++ example, you can see how the verbosity of Objective-C roughly tripled in the C++ version. This is also a stringly-typed api, meaning that if you mistype a selector, the compiler won't catch it, and you'll be hit with a runtime exception. You could argue that Objective-C will also gladly let you write whatever selector you want and still throw at runtime. However, with tooling like Xcode, such mistakes would be caught. You can even compile your Objective-C code with -Werror=objc-method-access to turn these problems into compile-time errors.

We'll now move to Rust, a relatively young programming language. It's not officially supported by Apple; however, it has much better crossplatform support than, for example, Swift. More importantly, it can target Apple's ObjC runtime. To do that, we'll use the objc crate, which offers some convenient wrappers around the runtime functions:

extern crate objc; // remember to add it to your Cargo.toml!

use objc::declare::ClassDecl;
use objc::runtime::{Object, Sel, BOOL, YES};
use objc::{class, msg_send, sel, sel_impl};
use std::os::raw::c_char;
use std::ptr;

#[repr(C)]
struct Frame(pub f64, pub f64, pub f64, pub f64);

extern "C" fn did_finish_launching_with_options(
    obj: &mut Object,
    _: Sel,
    _: *mut Object,
    _: *mut Object,
) -> BOOL {
    unsafe {
        let frame: *mut Object = msg_send![class!(UIScreen), mainScreen];
        let frame: Frame = msg_send![frame, bounds];
        let win: *mut Object = msg_send![class!(UIWindow), alloc];
        let win: *mut Object = msg_send![win, initWithFrame: frame];
        let vc: *mut Object = msg_send![class!(ViewController), new];
        let _: () = msg_send![win, setRootViewController: vc];
        let _: () = msg_send![win, makeKeyAndVisible];
        let white: *mut Object = msg_send![class!(UIColor), whiteColor];
        let _: () = msg_send![win, setBackgroundColor: white];
        obj.set_ivar("window", win as usize);
    }
    YES
}

extern "C" fn did_load(obj: &mut Object, _: Sel) {
    unsafe {
        let _: () = msg_send![super(obj, class!(UIViewController)), viewDidLoad];
        let view: *mut Object = msg_send![&*obj, view];
        let btn: *mut Object = msg_send![class!(UIButton), buttonWithType:0];
        let _: () = msg_send![btn, setFrame:Frame(100., 100., 80., 30.)];
        let title: *mut Object = msg_send![class!(NSString), stringWithUTF8String:"Click\0".as_ptr()];
        let _: () = msg_send![btn, setTitle:title forState:0];
        let blue: *mut Object = msg_send![class!(UIColor), blueColor];
        let _: () = msg_send![btn, setTitleColor:blue forState:0];
        let _: () = msg_send![btn, addTarget:obj action:sel!(clicked) forControlEvents:1<<13];
        let _: () = msg_send![view, addSubview: btn];
    }
}

extern "C" fn clicked(_obj: &mut Object, _: Sel) {
    println!("clicked");
}

fn main() {
    unsafe {
        let ui_responder_cls = class!(UIResponder);
        let mut app_delegate_cls = ClassDecl::new("AppDelegate", ui_responder_cls).unwrap();

        app_delegate_cls.add_method(
            sel!(application:didFinishLaunchingWithOptions:),
            did_finish_launching_with_options
                as extern "C" fn(&mut Object, Sel, *mut Object, *mut Object) -> BOOL,
        );

        app_delegate_cls.add_ivar::<usize>("window");

        app_delegate_cls.register();

        let ui_view_controller_cls = class!(UIViewController);
        let mut view_controller_cls =
            ClassDecl::new("ViewController", ui_view_controller_cls).unwrap();

        view_controller_cls.add_method(
            sel!(viewDidLoad),
            did_load as extern "C" fn(&mut Object, Sel),
        );

        view_controller_cls.add_method(sel!(clicked), clicked as extern "C" fn(&mut Object, Sel));

        view_controller_cls.register();

        let name: *mut Object =
            msg_send![class!(NSString), stringWithUTF8String:"AppDelegate\0".as_ptr()];

        extern "C" {
            fn UIApplicationMain(
                argc: i32,
                argv: *mut *mut c_char,
                principalClass: *mut Object,
                delegateName: *mut Object,
            ) -> i32;
        }

        let autoreleasepool: *mut Object = msg_send![class!(NSAutoreleasePool), new];
        UIApplicationMain(0, ptr::null_mut(), ptr::null_mut(), name);
        // Drained after the app returns, mirroring the Objective-C version
        let _: () = msg_send![autoreleasepool, drain];
    }
}

This can be built with cargo:

cargo build --target=x86_64-apple-ios # targeting a simulator
# you can move the generated binary to a prepared bundle folder with an appropriate info.plist

Similarly, you can use cargo-bundle, and define bundle metadata in your Cargo.toml:

[package.metadata.bundle]
name = "myapp"
identifier = "com.neurosrg.myapp"
category = "Education"
short_description = "A pure rust app"
long_description = "A pure rust app"

And with cargo-bundle:

cargo bundle --target x86_64-apple-ios
xcrun simctl install booted target/x86_64-apple-ios/debug/bundle/ios/pure.app

A nice thing about cargo-bundle, even though its iOS bundle support is experimental, is that it's still faster than xcodebuild!

Back to our Rust example: it's not as verbose as the pure C/C++ version, thanks to the objc crate doing a lot of the heavy lifting, such as the type encodings and other niceties. It does, however, require explicit types when using the msg_send! macro. Also, returning structs works with msg_send!, whereas in C/C++ you'd want to use objc_msgSend_stret(). Although you don't see as many quoted strings as in the C++ version, it's still a stringly-typed api, so a typo won't be caught at compile time; instead it'll throw a runtime exception. One downside is that since Rust isn't officially supported by Apple, projects like the objc crate and others wrapping Apple frameworks are maintained by members of the Rust community, and can suffer from lack of maintenance (which appears to have happened to the objc crate).

Conclusion

C/C++/Rust can easily target the Objective-C runtime; less so when it comes to targeting Swift, but that's not as important. C and C++ have an extra advantage in that they're officially supported by Apple: you can create an Xcode project with C/C++ source files and headers, and you get automatic integration into the build, code completion etc. The system compiler on Apple platforms is clang (AppleClang), which is a C/C++/ObjC/ObjC++ compiler. The default buildsystem, xcodebuild, supports creating universal binaries out of the box, as does CMake, the de-facto C++ buildsystem (which Xcode doesn't support directly, though CMake can generate xcodeproj files). Rust comes with its own buildsystem/package manager, Cargo. Although Cargo is great, like Rust it's not directly supported in Xcode. Also, at the time of writing, it can't generate macOS or iOS bundles, nor can it produce universal binaries; luckily, you can use packages like cargo-bundle and cargo-lipo to create your bundles and universal libraries. Using the ObjC runtime functions like objc_msgSend/msg_send!, apart from allowing a developer to write in their preferred programming language, adds no advantage whatsoever to the codebase. As an api, it's exceedingly verbose and stringly-typed (like the JNI), and in Rust so much of it is unsafe that it's just more convenient to wrap it all in unsafe blocks. Essentially, writing Objective-C/C++ can be the least painful path.

Rust vs C++ for frontend web (wasm) programming

Date: 2022-07-26

Several languages can now target wasm; I'll focus on Rust and C++, as these seem to have the most mature ecosystems: C++'s Emscripten toolchain and Rust's wasm-bindgen (web-sys, js-sys etc) ecosystem. Keep in mind that both languages leverage LLVM's ability to generate wasm. Wasm itself has no direct access to the DOM; as such, DOM calls pass through javascript.

Basically, when using a language, you're buying into its ecosystem. You can still target Emscripten from Rust via the wasm32-unknown-emscripten target; however, that requires the LLVM version your Rust toolchain uses and the LLVM version Emscripten uses to be compatible. Similarly, you can invoke clang directly with the --target=wasm32 flag (this requires wasm-ld and the std headers), and it should output wasm. However, the non-emscripten wasm ecosystem is barren!

Advantages of using C++:

  • Emscripten's headers are C/C++ headers.
  • Emscripten supports CMake (the build system du jour for C++) via both emcmake and a CMake toolchain file. However, the docs refer to raw calls to emcc/em++, which can be difficult to translate to proper CMake scripts:
add_executable(index src/main.cpp)
set_target_properties(index PROPERTIES SUFFIX .html LINK_FLAGS "-s WASM=1 -s EVAL_CTORS=2 --bind --shell-file ${CMAKE_CURRENT_LIST_DIR}/my_shell.html")
  • Emscripten provides Boost, SDL and OpenGL/WebGL support out of the box.
  • Emscripten translates OpenGL calls to WebGL.
  • vcpkg (a C/C++ package manager) supports building packages for emscripten.
  • Qt supports Emscripten (buggy).
  • Emscripten provides a virtual file system that simulates the local file system, std::filesystem works out of the box.
  • Emscripten supports multithreading.
  • The above means that an existing native game leveraging SDL (and optionally OpenGL) can be recompiled using Emscripten, with probably minor tweaks to the build script (and event loop), and things should run.
  • Emscripten bundles the binaryen toolchain as well. For example, compiling with optimizations will automatically run wasm-opt.

Disadvantages of using C++:

  • Emscripten requires 800 MB of install space. It bundles many tools which might already be installed (like nodejs), and if it's installed in an unusual location, the install would likely be broken!
  • Using C++ outside of Emscripten to target wasm/web is complicated. It requires wasm-ld, the std/system headers (maintained in the Emscripten project) and writing the js glue manually.
  • Emscripten provides a WebIDL binder, however, bindings to the DOM api are not provided. It can be integrated into a build script, but in any case, it's not ergonomic to generate and use.

This makes targeting the DOM with Emscripten a bit of a chore:

#include <emscripten/val.h>

using emscripten::val;

int main() {
    auto doc = val::global("document");
    auto body = doc.call<val>("getElementsByTagName", val("body"))[0];
    auto btn = doc.call<val>("createElement", val("BUTTON"));
    body.call<void>("appendChild", btn);
    btn.set("textContent", "Click");
}

As you can probably guess, these DOM calls are stringly-typed and aren't checked at compile time; if you pass a wrong type, or even make a typo, it will fail at runtime.

Advantages of using Rust:

  • Cargo is agnostic to the target. And installing the wasm32-unknown-unknown target is trivial.
  • Even without Emscripten, wasm-bindgen provides bindings to much of the DOM api and other javascript calls.
  • wasm-bindgen provides a cli tool which allows generating javascript glue code for loading into web and non-web apps, which can be easily installed using cargo install wasm-bindgen-cli.
  • The Rust ecosystem provides several tools like wasm-pack and trunk which automatically call wasm-bindgen-cli and create the necessary js and html files needed for web.
  • The above means that the calls are checked at compile time, and are easier to program against:
// The above code translated to Rust
use wasm_bindgen::prelude::*;

fn main() {
    let win = web_sys::window().unwrap();
    let doc = win.document().unwrap();
    let body = doc.body().unwrap();
    let btn = doc.create_element("BUTTON").unwrap();
    body.append_child(&btn).unwrap();
    btn.set_text_content(Some("Click"));
}

Disadvantages of using Rust:

  • The wasm32-unknown-unknown toolchain doesn't translate filesystem or threading calls. (The wasi target does translate std::fs calls into the platform's equivalent calls; however, an app targeting wasi might not work in the browser.)
  • The wasm32-unknown-unknown toolchain can optimize the output when building for release, but further optimization requires installing binaryen.
  • The wasm32-unknown-unknown toolchain doesn't translate OpenGL calls to webgl calls.
  • The wasm32-unknown-unknown toolchain doesn't support linking C/C++ libs built for wasm.
  • wasm-bindgen doesn't support the emscripten wasm target.

Conclusion

Both Rust and C++ can target the browser and perform DOM calls. Rust provides a better api with web-sys; Emscripten's bind api is stringly-typed, so it can be a chore to program against. The wasm32-unknown-unknown target is better geared for DOM calls or graphics via the canvas api, while Emscripten is better geared for apps targeting OpenGL/SDL (games). As for client-side computation, both targets can be used.

fltk-rs in 2022

Date: 2023-01-02

Looking back

Looking back at 2022, fltk-rs saw its 1.0 release in April 2022, and in October 2022 the project finished its 3rd year. 2022 also saw the publication of the fltk-rs book, and a rewrite of fl2rust, which is a FLUID-to-Rust transpiler. FLUID is FLTK's RAD application designer, similar to Gtk's Glade and QtCreator.

Looking back further, fltk-rs was started for a specific requirement: to easily deploy statically-linked gui applications on Windows 7 PCs in my university hospital's simulation center. It also had to be crossplatform for those using Mac laptops!

At the time, most pure Rust toolkits lacked many functionalities I needed (menus, tables, multiline text input, custom graph drawing, multiwindows ...etc), and I was just starting to use Rust so I felt incapable of contributing to the budding gui ecosystem. Gtk and Qt bindings existed at the time, but required dynamic linking. It was also during covid lockdown, so I had some extra time since teaching duties and elective cases decreased. So instead of just writing the project in another language, I started learning Rust and applying that knowledge into creating the bindings to FLTK.

That's to say that, as a novice, I made many mistakes, most of which I consider fixed with the 1.0 release. However, there are some which were pointed out later, namely the timeout api, especially when it comes to cancellation. The older functions were deprecated, but the newer ones like app::add_timeout3 and app::remove_timeout3 stick out like a sore thumb.

Maybe releasing a 1.0 was a bit hasty. It did, however, teach me more about mitigating api breakage. Another aspect was targeting FLTK 1.4, which, if you don't know, is a yet-to-be-released version of FLTK. That means it's a moving target. And even though FLTK is considered quite conservative as C++ codebases go (it's still using C++98, without the std library!), it's actively developed, and several of the added functions have changed their signatures, which required some workarounds in fltk-rs. Some things were out of my hands, such as the upstream removal of the FLTK Android driver, since it was considered experimental and difficult to integrate on the C++ side, especially in preparation for the 1.4 release; so, to avoid managing forks and such, it was subsequently removed from fltk-rs.

On the other hand, FLTK itself saw nice improvements. Drawing on Linux/BSD now uses Cairo for anti-aliased drawing, and on Windows it uses GDI+ to the same effect. A Wayland backend was added which allows targeting Wayland directly, i.e. not through xwayland. And the OpenGL backend was extended to allow drawing widgets using OpenGL. That means a GlWindow can now display widgets, if you need hardware acceleration or need to display widgets on top of 3D graphics!

Looking forward

The current plan is, once FLTK 1.4 ships, to release the last 1.x version of fltk-rs, and to continue working on fltk-rs 2.0 (work on that has started in the version2 branch of the fltk-rs repo). That branch would use a 0.20.x version number (if possible) until an FLTK 1.5 is released, and only then would version 2 be released.

I'm also planning to see if AccessKit can be retrofitted to fltk-rs, and maybe provide that functionality in a different crate. Even though FLTK handles keyboard navigation and input method editors, it still lacks in screen reader support.

I also plan to try out the newer Rust gui toolkits, since I feel far removed from where I once was. I've only tried egui in the past year and a half, and that was to add an fltk-rs integration to it.

I'm already excited to see the ecosystem maturing. If you frequent the Rust subreddit, you'll notice a recurring question on which gui framework to use, and there will always be a few who say that Rust isn't ready for gui yet. Maybe that was true a few years back, but if you compare the situation to other programming languages (apart from C/C++), Rust already provides many gui crates that you can use today.

Fibonacci benchmarks between js, wasm and server

Introduction

WebAssembly can't directly access the DOM; it has to call javascript to do so, and is known to incur a cost when manipulating the DOM. But how about raw computation? How does wasm compare to server-side computation or client-side javascript computation?

The source code for the benchmark can be found at https://github.com/MoAlyousef/fib-bench, along with instructions on how to build it.

Results

With an input value of 1:

  • servertime: 6.831298828125 ms
  • wasmtime: 0.008056640625 ms
  • jstime: 0.004150390625 ms

With an input value of 45:

  • servertime: 2983.470703125 ms
  • wasmtime: 8184.0751953125 ms
  • jstime: 15975.77490234375 ms

The results should appear in the browser's dev console.

This was run on a Windows machine, inside WSL2 (x86_64 GNU/Linux). Specs:

  • Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
  • Speed: 3.40 GHz
  • Cores: 4
  • Logical processors: 8
  • RAM: 16 GB
  • HDD: ST1000LM035-1RK172

Rust version: 1.71 stable. Google Chrome: Version 119.0.6045.107 (Official Build) (64-bit).

Conclusion

  • wasm-opt -O3 didn't improve performance by much. It did, however, reduce the generated wasm size by 30 percent.
  • Calling server-side computations requires a network round-trip and marshalling data to and from the server, which incurs an unnecessary cost when the computation is trivial. In such cases, javascript and wasm offer comparable performance; a wasm function call, even though cheap, can be twice as slow as the javascript one.
  • For intensive computations, the network cost can be considered negligible, since native computation remains faster than wasm. Even then, client-side javascript is only about twice as slow as wasm!
  • Wasm in the browser, to me, makes sense when you want to target the web using a language other than javascript. Although I'm no fan of js, js browser engines do a good job of optimizing it. However, other languages bring other advantages to the table, either in language merit or ecosystem, and with wasm these can be used in browsers. It also makes sense if you're serving static web pages and not handling (or can't handle) POST requests, or want to reduce server-side computation.