What's new in Nullcast

We ship every week, so you can ship more great designs.

Coding 3D with BabylonJS


Everyone's talking about the usual things like developing applications, fixing bugs, merging PRs, and so on. Who doesn't need a change, huh? Everyone's impressed by the movie Avatar, right? For a change, why don't you try something creative like that, though obviously on a much smaller scale?

I am talking about the same 3D thing that you have in mind. Enter BabylonJS - a cool, suave 3D Engine.

What's that?

Babylon.js is a real-time JavaScript 3D engine for displaying 3D graphics in the browser via HTML5. The source code is available on GitHub and distributed under the Apache License 2.0. It was initially released in 2013 under the Microsoft Public License, developed by two Microsoft employees mostly in their free time as a side project: David Catuhe, who created the 3D engine, and David Rousset, who added VR, gamepad, and IndexedDB support. They were also helped by the artist Michel Rousseau, who contributed several 3D scenes.

With 18.6K stars, 547 watchers, and 2.9K forks, Babylon.js is rocking the 3D engine world.

Tech Behind

The source code is written in TypeScript and compiled into JavaScript, which is available to end users via npm or a CDN. The Babylon.js 3D engine makes use of WebGL for 3D rendering.

Models are rendered by a shader program that determines the pixel positions and colors on the canvas, using polygon meshes, textures, cameras, and lights, along with a 4 x 4 world matrix for each object that stores its position, rotation, and scaling. Photorealistic images are produced with the same method using physically based rendering along with post-processing. For simulating collisions, either Cannon.js or Oimo has to be plugged in to Babylon.js. Animations use key-frame objects called animatables, and full character animation is done using skeletons with blend weights.
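To make the world-matrix idea concrete, here is a plain-JavaScript sketch (not the Babylon.js API; Babylon builds and applies these matrices for you) of how a single 4 x 4 matrix can store an object's scaling, rotation about the Y axis, and position, and transform a point:

```javascript
// Illustrative sketch, not Babylon.js API: a 4x4 world matrix combining
// scaling, rotation about Y, and translation (row-vector convention).
function worldMatrix(scale, rotY, position) {
  const c = Math.cos(rotY), s = Math.sin(rotY);
  const [sx, sy, sz] = scale, [tx, ty, tz] = position;
  return [
    [sx * c,  0,  sx * s, 0],
    [0,       sy, 0,      0],
    [-sz * s, 0,  sz * c, 0],
    [tx,      ty, tz,     1], // the last row stores the object's position
  ];
}

// Transform a point [x, y, z], treated as a row vector, by the matrix
function transformPoint(m, [x, y, z]) {
  return [
    x * m[0][0] + y * m[1][0] + z * m[2][0] + m[3][0],
    x * m[0][1] + y * m[1][1] + z * m[2][1] + m[3][1],
    x * m[0][2] + y * m[1][2] + z * m[2][2] + m[3][2],
  ];
}

// A unit-cube corner scaled by 2 and moved up 1 unit:
transformPoint(worldMatrix([2, 2, 2], 0, [0, 1, 0]), [1, 1, 1]); // → [2, 3, 2]
```

The engine recomputes each object's world matrix whenever its position, rotation, or scaling changes, and the shader uses it to place vertices on screen.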

What's the kick?

The real kick comes when we render a scene on the canvas with some cool interactions. Before that, let's see the workflow of a Babylon project.

In Babylon, everything works inside a canvas. For a canvas to render stuff, there should be a rendering engine. For that, we use Babylon’s Rendering Engine. So basically the flow would be Babylon Engine -> Canvas -> What we see

Let's code...

First, we declare typed variables for the scene, engine, canvas, etc.

private scene: BABYLON.Scene;
private engine: BABYLON.Engine;
private canvas: HTMLCanvasElement;

Then, in a useEffect (for React and Next.js), we assign actual values to those variables.

useEffect(() => {
    // canvas is selected using an id attribute ('renderCanvas' is an example id)
    canvas = document.getElementById('renderCanvas') as HTMLCanvasElement;
    engine = new BABYLON.Engine(canvas, true);
    scene = new BABYLON.Scene(engine);
    engine.runRenderLoop(() => {
        scene.render();
    });
}, []);

If you run the program now, you will see a white screen in the browser. Why? Because we are rendering nothing. Let's add a camera; we need one, don't you think?

const camera = new BABYLON.FreeCamera('camera',
    new BABYLON.Vector3(0, 5, -10), scene);
camera.attachControl(canvas, true);

What we did was:

  1. Create a FreeCamera, meaning a TPP-view camera. There is UniversalCamera if you need an FPP view
  2. Position the camera at (X, Y, Z) coordinates
  3. Append/attach the camera to the scene

Now the camera has been defined. But nothing will appear if there is no light. Yeah... that's right. You have to specify everything. Let's attach a light.

const light = new BABYLON.HemisphericLight('light',
    new BABYLON.Vector3(0, 1, 0), scene);

What we did was:

  1. Define a hemispheric/ambient light. Feel free to do some research on the other types of lights provided
  2. Set the light's (X, Y, Z) vector (for a hemispheric light this is a direction, not a position)
  3. Append the light to the scene

Everything we have created so far has many attributes. For example, there is the intensity of the light, which you can adjust; the default is 1.0. Likewise, all the attributes of the camera, light, scene, etc. can be changed according to our needs.
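For example (the attribute names below are from the Babylon.js API; the values are just illustrative):

```javascript
light.intensity = 0.7;   // dim the light; the default is 1.0
camera.speed = 0.5;      // slow down the camera's movement
scene.clearColor = new BABYLON.Color4(0, 0, 0, 1); // black background
```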

For the sake of 3D-creation excitement, let's create a sphere. Hey, one sec. If you are going to create a sphere, where are you going to place it? In a vacuum??? Let's create a ground first. I think now you get what I meant by specifying everything we need. Okay, first create a ground and then a sphere.

const ground = BABYLON.MeshBuilder.CreateGround('ground',
    { width: 10, height: 10 }, scene);

What we did:

  1. Create a ground named ‘ground’
  2. Specify the width and height as 10 units
  3. Attach the ground to the scene, don't forget this !

const sphere = BABYLON.MeshBuilder.CreateSphere('sphere',
    { diameter: 2, segments: 20 }, scene);
sphere.position.y = 1;

What we did here is:

  1. Create a sphere with diameter 2 and 20 segments
  2. Append the sphere to the scene

The final result when you run the app:

Here you can look around using the mouse, just like in a game, but you cannot navigate. Navigation can be implemented using some advanced techniques; more on that later.

Find the playground code here: https://playground.babylonjs.com/#2KRNG9#1140

Here is the first checkpoint, understanding the basics…

Now feel free to do some research on:

  1. Cameras
  2. Lights
  3. Animations
  4. MeshBuilder
  5. Environment
  6. Events

You can find cool BabylonJS examples here: BabylonJS Examples

Here are some cool ideas to get you started:

  1. Download a Human 3D model and get it walking across the ground
  2. Create a simple ball bouncing game
  3. Create an environment with changing level of details
  4. And So on….


Don't think that BabylonJS is the only gun in the 3D world... The rivals include:

  1. Three.js
  2. Greensock
  3. PlayCanvas
How Nextjs Image is so optimized


The Next.js Image component is so optimized that it loads almost instantly after the first time. Next does a really great job behind the scenes: it looks at the image file's type and quality and decides at which quality it should be rendered. Yes, even if the image is 4K, it is not rendered in 4K.

Looking at the Code

We can download the code of Next.js from its GitHub repo and take a look at the image component, image.tsx. The associated files are image-optimizer.ts and image-config.ts.

From the paths alone, it is clear that image.tsx is for the client side and the optimizer is for the server side.

Client Side

Just a reminder that the Image component takes a lot of props.

✅ For different layouts, Next sets different sizes in terms of width and height.

✅ Next checks if there is a blur option from the user. If there is, it sets the blur image to the user-provided one; if not, it sets its own.

✅ If src starts with ‘data:’ or ‘blob:’, the image is treated as unoptimized and shown as a raw image
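A simplified sketch of that check (the real logic lives in image.tsx; this condensed helper is my own):

```javascript
// Treat data: and blob: URLs as unoptimized raw images
function isRawSrc(src) {
  return src.startsWith('data:') || src.startsWith('blob:');
}

isRawSrc('data:image/png;base64,iVBOR'); // → true
isRawSrc('/me.png');                     // → false
```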

✅ Errors for LCP, not providing width and height, not providing src, etc. are handled beautifully

⚡ Priority-mode images should load a lot faster. Next does this by adding a link tag with rel="preload" in the Head along with rendering the image

⚡ Behind the scenes, the img tag makes use of a useCallback of a function which handles loading, with dependencies src (attr), placeholder (attr), onLoadRef (fn), onLoadingCompleteRef (fn), setBlurComplete (fn), onError, and unoptimized (attr)

⚡ When it comes to large-resolution images, Next only serves up to 2x resolution, i.e. two device pixels per CSS pixel, because 2x is roughly the maximum density the human eye can resolve details from. Even if the image were 3x, humans could not see that level of detail through it, and it would consume a large amount of data. This means that whatever width the user gives the Image component, the maximum resolution Next will render is 2 times that width. If 250 is the given width, at most 640px is rendered (2x would be 500, and 640 is the next available standard screen width)
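A hedged sketch of that selection: the deviceSizes below are Next.js's documented defaults, and the function is a simplification of the logic in image.tsx, not the actual source.

```javascript
// Next.js default deviceSizes (configurable in next.config.js)
const deviceSizes = [640, 750, 828, 1080, 1200, 1920, 2048, 3840];

// Pick the smallest configured size that covers 2x the requested width
function maxRenderedWidth(requestedWidth) {
  const target = requestedWidth * 2; // cap at 2x density
  return deviceSizes.find((w) => w >= target) ?? deviceSizes[deviceSizes.length - 1];
}

maxRenderedWidth(250); // → 640 (2x is 500; 640 is the next standard size)
```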

And many other things are going on in the client side.

Server Side

✅ The server side makes use of Sharp, an image-processing package that is fast and light. Sharp has lots of methods for manipulating an image into whatever we want it to be. You might want to check out the available methods; they are really cool

✅ Then, making use of a package called Squoosh, another cool image-processing package, Next handles processing images at the desired quality depending on the file type

✅ After all the processing, headers like Cache-Control, Content-Disposition, Content-Security-Policy, X-Nextjs-Cache, Content-Type, Content-Length, etc. are sent along with the response. The important thing to mention is that almost all file types are converted to WebP for more optimized loading
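As an illustration, the response headers described above might look like this. The header names come from the article; the values are examples I supplied, not Next.js's actual output.

```javascript
// Illustrative only: header names per the article, values are examples
const responseHeaders = {
  'Cache-Control': 'public, max-age=60, must-revalidate',
  'Content-Type': 'image/webp',          // most formats are served as WebP
  'Content-Length': '12345',
  'Content-Disposition': 'inline; filename="image.webp"',
  'Content-Security-Policy': "script-src 'none'; sandbox",
  'X-Nextjs-Cache': 'HIT',               // HIT, MISS, or STALE
};
```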

⚡ When the server calls ImageOptimizerCache (the class name in image-optimizer), the image is processed as described above and cached inside ‘_next/image’ by the image-routing code in the server. Stale cache entries are periodically removed by Next itself

Error Handling

All types of error cases (width, height, src, srcSet, blurImage, onLoad, onLoadComplete, wrong path, etc.) are handled beautifully by Next, as you can see if you look at the code itself.

Some examples:

  1. Handling the width property associated with the fill attribute
  2. Handling the src attribute
  3. Handling the width and url properties


Combine the server-side work with the client-side work and you get a beautifully optimized Image component that delivers super-fast image loading and cached responses. That's how the Next.js <Image/> component works.
