

Writing web code in a normal way

by kbrecordzz · November 11, 2024 · Internet, Technology, programming

I just read something about the latest web programming trend "#nobuild", which means: Writing code in a normal way in order to make it run normally in a web browser. It never ceases to amaze me how people can believe something that has existed since the web started - no, something that is the foundation the web is built on - is something completely new. It's obvious they don't know how the web and web browsers work. So I decided to publish a draft that's been lying around since August, a text about going through the web browser as smoothly as possible to get the most hardware power for your web program:

--------

Today, the web browser isn't just an app that shows documents on the internet, it's an operating system that you can make programs and games for. However, programming for the web is NOT a deterministic art like programming for an old game console, where you know all the circumstances. No, the web browser is the result of 30 years of accumulated features that all need to stay backwards-compatible with each other. Add to that all the different web browsers that try (and sometimes don't try) to show you the same thing, and devices with a wide variety of power and possibilities. It's an uncertain art. However, we can take a couple of things for granted and expect them to be in all web browsers. If we know which those are, we can rely on them and assume nothing about all the other variables.

To make programs and games for the web, we want to utilize the user's device's hardware through the web browser, mainly using the languages HTML and Javascript (and WebGL if we want 3D graphics, which I do). Games and programs are more Javascript-heavy than HTML-heavy, so I'll focus on Javascript here. Why do I prefer Javascript and WebGL over WebAssembly and WebGPU, which are newer and claim to be more performant? Because, while we want performance, the internet is, in my opinion, even more about wide accessibility than about performance. Javascript has been around for almost 30 years and will continue to be around in the future. And if WebGL continues to be widely used for a few more years, it will become the same kind of eternal standard for 3D. This is the reality of which languages, code and formats work on the internet, regardless of what standards and people say should work. There's too much stuff out there for browsers to just break it all.

Let's start with the first important second of your Javascript program. The bottlenecks slowing down your program here are the download speed and the "parsing" and bytecode compiling of your Javascript code. If your files are smaller, they'll get downloaded more quickly (obviously). Parsing is then a step that converts your Javascript code into something more machine-friendly, and this is done by going through the code one step at a time. So how long this step takes depends partly on the code length and partly on how complex the code is (how many concepts there are). An example: the code "var x = 0.250+0.500;" is longer than "var x = 0.25+0.5;" in number of characters, but "var x = 0.75;" has fewer parts conceptually than both of them, because we removed an addition (it also happens to be shorter). Less code means less stuff for the parser to go through. And because everything in your code will be handled by the browser in some way, less complex code (fewer parts) means fewer things for the browser to do in general. Making less of everything can never have an unexpected downside, and that's why this is one of the few things we can rely on in web programming. Everything else can change depending on the browser, so trying to hyper-optimize to make something work in a 2024 version of Google Chrome could, and will, backfire at some point.
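
To make that concrete, here's a small sketch of the same idea at a slightly larger scale (my own illustration, not from any spec): anything you precompute yourself is something the parser never has to walk through and the browser never has to execute.

    // The same value, written with decreasing amounts of work for the browser:
    var a = 0.250 + 0.500; // longest in characters, and contains an addition
    var b = 0.25 + 0.5;    // shorter, but the addition is still there
    var c = 0.75;          // one literal: nothing to evaluate at all

    // The idea scales up: precomputing removes parts entirely.
    var circleArea = Math.PI * 5 * 5;              // multiplication runs every time
    var circleAreaPrecomputed = 78.53981633974483; // just a literal to load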

The product of these steps is "bytecode". If the code is made of fewer parts, it will probably also turn into fewer bytecode instructions, which generally makes the program faster to execute in the "bytecode interpreter". Javascript being a bytecode-run language means that your program won't (or maybe it will, see later) run as machine code directly on the CPU. It's the web browser executable's machine code that runs on your CPU, and it has a loop that checks the current "bytecode" instruction of your Javascript application and then runs the machine code corresponding to that bytecode. So, if you visualize the web browser's "bytecode interpreter loop" code being wrapped around your actual code, it's easy to see why you're never, or rarely, able to reach the device's full potential with your Javascript code. Knowing this, it may be smart to use built-in Javascript functions (like Math.abs() and similar) instead of writing your own code in some cases, because these often have predefined machine code which lets you skip the bytecode conversion and interpretation steps. Apart from built-in functions, writing your own code is almost always better than using a library, because libraries tend to be generalized for many purposes and therefore contain much more code than you need for your specific purpose. And that's only talking about the library functions you actually use. All the functions you don't use will also get downloaded, and at least some computing power will have to be spent ignoring them, no matter how efficiently that's done.
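
To make the "loop wrapped around your code" picture concrete, here is a toy interpreter (purely illustrative; no real engine is anywhere near this simple) where the while-loop plays the role of the browser's machine code and the array plays the role of your program:

    // A toy bytecode interpreter, only to show the principle.
    var PUSH = 0, ADD = 1, PRINT = 2;

    // "Bytecode" for the program: print(2 + 3)
    var bytecode = [PUSH, 2, PUSH, 3, ADD, PRINT];

    var stack = [];
    var pc = 0; // program counter

    // This loop stands in for the browser's machine code; your program
    // is just data that it reads one instruction at a time.
    while (pc < bytecode.length) {
      switch (bytecode[pc++]) {
        case PUSH: stack.push(bytecode[pc++]); break;
        case ADD: stack.push(stack.pop() + stack.pop()); break;
        case PRINT: console.log(stack.pop()); break;
      }
    }

Every instruction pays the cost of one trip through that dispatch loop, which is why fewer bytecode instructions generally means faster execution, and why a built-in like Math.abs() that jumps straight to predefined machine code can skip the loop entirely.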

If you make 3D graphics, you'll have real access to the GPU through WebGL in a way you don't have CPU access with Javascript. Here the browser isn't the biggest obstacle. The biggest bottlenecks are sending the 3D coordinate data to the GPU (the fewer "draw calls", the better), the number of rendered pixels, per-pixel effects like anti-aliasing, etc. The number of triangles/coordinates may also matter, but probably not as much as the per-pixel work and the data sending. Fabian Giesen has written well about this.
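
As one sketch of the draw call point (assuming "gl" is a WebGL context and "positionLoc" is a vertex attribute location from an already-compiled shader program; both are assumptions, not shown here): merging geometry into one buffer turns many uploads and draw calls into one of each.

    // Two triangles that could each have been their own draw call...
    var triangleA = [0.0, 0.0, 0.5, 0.0, 0.5, 0.5];
    var triangleB = [-0.5, -0.5, 0.0, -0.5, 0.0, 0.0];

    // ...merged into a single typed array instead:
    var merged = new Float32Array(triangleA.concat(triangleB));

    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, merged, gl.STATIC_DRAW);
    gl.enableVertexAttribArray(positionLoc);
    gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

    // One upload and one draw call for everything:
    gl.drawArrays(gl.TRIANGLES, 0, merged.length / 2);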

What I've described so far about the web browser is mostly about how it works in the beginning, before its optimizations have kicked in. You can read about "JIT compiling" on the web - this WILL let you utilize the device and run real machine code on it - but since I think it's more interesting to get good performance under the worst conditions (lag is annoying even if it almost never happens, and going easy on the hardware can, at least on phones, drain less battery from the device), I've focused on the web browser's simplest execution form. You might also be interested in reading about "garbage collection". Also worth knowing is that the web browser doesn't have true access to your device either; it's only one of many processes running on your operating system, which has the real access to the hardware. The slice left over for your web application isn't big!
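
As a taste of why "garbage collection" matters for the worst-case performance I'm talking about: every object you create is something the browser eventually has to clean up, and that cleanup can pause your program at unpredictable moments. A sketch of the difference (my own illustration):

    // Creating a new object every frame produces garbage for the
    // collector, which can cause the occasional lag spike:
    function updateWasteful(entities) {
      for (var i = 0; i < entities.length; i++) {
        var velocity = { x: entities[i].speed, y: entities[i].speed }; // new object each call
        entities[i].x += velocity.x;
        entities[i].y += velocity.y;
      }
    }

    // Reusing one scratch object gives the collector nothing new
    // to track, frame after frame:
    var scratch = { x: 0, y: 0 };
    function updateFrugal(entities) {
      for (var i = 0; i < entities.length; i++) {
        scratch.x = entities[i].speed;
        scratch.y = entities[i].speed;
        entities[i].x += scratch.x;
        entities[i].y += scratch.y;
      }
    }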

--------

Note that I didn't talk about "optimization techniques" or anything else advanced here. When you know how web browsers work, making high-performance web programs and games doesn't require anything more than being resourceful with your Javascript and WebGL code and using simple standard functions that have been around for a long time (read my next post in this "series" for more about such browser features). Going easy on all parts of the hardware instead of squeezing the last drop out of it. This also lets you avoid cross-browser problems, backwards- and forwards-compatibility problems, and extensive device testing, because simple timeless code works everywhere without ever stopping. Your program will be so lightweight and purposeful that all those add-on libraries only seem to complicate things. And without those extra 500 kb of libraries (of which you only use a few kilobytes anyway), the minifying, transpiling and bundling (I still don't know what these really mean) to optimize the code aren't needed anymore. That's what people have realized now with "#nobuild" - except they realized that computers are now fast enough to handle their huge code sizes, not that computers have always handled small, efficient code bases well.

