Introduction
There are literally trillions of trees on our planet. A typical mountainside in a densely forested area like British Columbia could be host to over a million trees. Rendering the corresponding scene in a virtual 3D environment obviously has its challenges.
In this blog post, I take a closer look at rendering all these trees and related ground cover like grass as part of the ongoing lifelike virtual world series.
Global Tree Map
https://www.washingtonpost.com/news/energy-environment/wp/2015/09/16/the-countries-of-the-world-ranked-by-their-tree-wealth/
Forest Data
When determining where to place a tree geographically in a lifelike virtual world, one typically looks to a forest density or tree cover map of sorts. With the attention the environment receives these days, you can imagine there is no shortage of data about our planet’s forests. In British Columbia, especially with the onset of forest fires, there are numerous resources for forest data like the species map shown below.
BC Tree Species Map
https://www2.gov.bc.ca/gov/content/industry/forestry/managing-our-forest-resources/forest-inventory/data-management-and-access
These forest maps are good for relatively broad areas but when you get down to the surface and look to place a tree at a particular location, you need more information about the terrain like the slope, and whether there are other features that would preclude placing a tree at that location, like a river. To achieve this, one can generate forest maps offline by combining layers like forest density maps, canopy height maps, species maps, road data, river data, etc., where in the last two cases, for example, the roads and rivers would act as “masks” to indicate areas where no trees should be placed. This type of activity is already being done by scientists and forest managers around the world to better understand our global forest inventory and aid in making timely and informed decisions about climate, conservation and logging.
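To make the masking idea concrete, here is a minimal sketch of combining a forest density layer with road and river masks. The 4x4 grid, density values and mask layouts are all hypothetical; real layers would be full-resolution rasters.

```python
import numpy as np

# Hypothetical 4x4 cells: tree-cover density in [0, 1] from a forest map.
density = np.array([
    [0.9, 0.8, 0.7, 0.2],
    [0.9, 0.9, 0.6, 0.1],
    [0.8, 0.7, 0.5, 0.0],
    [0.3, 0.2, 0.1, 0.0],
])

# Masks: 1 where a feature (road, river) precludes tree placement.
road_mask  = np.array([[0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]])
river_mask = np.array([[0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0]])

# Composite placement map: zero out density wherever any mask is set.
blocked = (road_mask | river_mask).astype(bool)
placement = np.where(blocked, 0.0, density)
```

Any number of additional layers (canopy height, species, land cover) can be folded in the same way before the result is baked out offline.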
These maps are often derived from freely available satellite imagery. However, mapping forests from an aircraft using a Light Detection and Ranging (LiDAR) scanning system is starting to become mainstream. In this scenario, the region’s governing body will typically hire a third-party contractor to fly the area of interest, process the data and provide the government customer with a dataset that can be browsed on a PC. These datasets are often processed down to sub-meter resolution, as fine as 1 cm between points. With data at this resolution, individual trees start to show up.
LiDAR dataset shows tree cover
https://www.nsnews.com/local-news/north-van-firm-looks-to-measure-tree-canopies-from-space-5411760
If we take this a step further, one can imagine it won’t be long before we have a tree inventory database that contains the location and information about every tree in a region. This is already happening in high-focus areas like BC where the province is developing an Individual Tree Database as part of their LiDAR BC program.
However, many other areas around the world are not at this level of tracking. So, we still need to apply maps and heuristics to place trees such that the virtual mountain looks correct and appealing.
Having said that, a potentially useful concept for managing tree data is to design a format that contains each individual tree’s location as well as its characteristics. This is what the ESRI shapefile was very good for back in the day: defining locations and accompanying attributes. Today, the 3D Tiles specification might be more appropriate for storing individual tree locations, especially because it adds a pointer to a 3D model used to render the tree. More on this later. The point here is that once a format for capturing individual trees is defined and implemented, third-party forest management organizations like the Province of BC could upload their own individual tree databases, replacing the generalized positions with actual sensed/analyzed positions. But that’s a future endeavour.
Forest Rendering
Now that we know where to place the trees on the terrain, we need to determine how to render them. If you look at what current game engines do today, the approach is to place a 3D tree model at its desired location in a scene editor. This tree model will then have multiple levels of detail (LOD) that will be managed by the game engine at runtime. In short, when you are far away from the tree, a low level of detail is used and as the observer gets closer, the game engine introduces progressively higher levels of detail until the observer is close enough to see the details in the bark. And with all the optimizations these game engines have implemented over the years, a game developer can get pretty good performance rendering thousands of trees in a scene. But an avatar flying around a true-scale planet sees a lot more than thousands of trees. And while Unity and Unreal could certainly handle more, the general architecture is not designed to handle millions of trees at once.
Levels of detail of a 3D tree model
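The distance-based LOD switching described above can be sketched as a simple threshold lookup. The distances here are illustrative, not values from any particular engine:

```python
def select_lod(distance_m, thresholds=(50.0, 200.0, 800.0)):
    """Return an LOD index for a tree at the given camera distance:
    0 = full-detail mesh, rising through coarser meshes,
    with the last index (here 3) typically being a billboard."""
    for lod, limit in enumerate(thresholds):
        if distance_m < limit:
            return lod
    return len(thresholds)
```

In a real engine the transition would also be hysteretic and faded to avoid visible switching, but the core idea is this lookup, done per object per frame.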
One of the issues is placement. With game engines, tree placement is done ahead of time in an editor. When rendering the planet’s trillions of trees, we obviously need to get a little craftier. This is why we use forest maps and other data to generate tree positions at runtime on the GPU. But once placed, we still need to render a 3D tree, or at least something that resembles a tree.
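One way to generate positions at runtime without a precomputed list is to derive a deterministic jitter from each grid cell’s coordinates, so the same cell always yields the same tree. A minimal CPU-side sketch (the hash constants are arbitrary; in practice this sort of thing runs in a shader):

```python
def tree_position(cell_x, cell_y, cell_size=10.0):
    """Deterministic pseudo-random position within a grid cell, so the
    same cell always yields the same tree and nothing needs storing."""
    # Simple integer hash, masked to 32 bits (a common GPU trick).
    h = (cell_x * 374761393 + cell_y * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    jx = (h & 0xFFFF) / 0xFFFF           # jitter in [0, 1]
    jy = ((h >> 16) & 0xFFFF) / 0xFFFF
    return (cell_x + jx) * cell_size, (cell_y + jy) * cell_size
```

A forest density map would then gate which cells actually spawn a tree.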
A simple approach many games use to render large numbers of trees at once is the “billboard,” a 2D trick that mimics a 3D tree with a camera-facing image. The lowest level of detail in the example above is usually a billboard.
Tree billboard example
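A cylindrical billboard only needs a yaw rotation so the upright quad faces the camera around its vertical axis. A minimal sketch of that computation, assuming a y-up coordinate system:

```python
import math

def billboard_yaw(tree_pos, camera_pos):
    """Yaw angle (radians) that rotates an upright quad around its
    vertical (y) axis to face the camera: a cylindrical billboard."""
    dx = camera_pos[0] - tree_pos[0]
    dz = camera_pos[2] - tree_pos[2]
    return math.atan2(dx, dz)
```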
Billboards can go a long way toward rendering a massive number of trees, but the combined runtime overhead of placement and rendering may still cause visual glitches. You may have seen this in large-scale virtual worlds where the 3D trees “pop” in and out of the scene, similar to the terrain popping I described in my Terrain Rendering blog post. We can use fading and other techniques to help minimize the popping, but this still does not alleviate the issue 100%.
Regardless of whether you’re rendering a 3D tree model or a billboard, GPU instancing is now commonplace on most video cards. It involves uploading a renderable object to the GPU once and then instructing the GPU to render many, many instances of that object at specified locations. The optimizations in the graphics libraries and the hardware itself are impressive indeed, resulting in tens if not hundreds of thousands of objects rendered per frame. I raise this point to make you aware that instancing is the core technique for rendering thousands of trees but, even then, between placement AND rendering, it is still not sufficient for our planet-scale world.
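A sketch of the CPU side of instancing: pack one model matrix per tree into a single array that is uploaded to the GPU once and drawn with a single instanced call. The helper below is purely illustrative and not tied to any particular graphics API:

```python
import numpy as np

def instance_transforms(positions, scales):
    """Pack one 4x4 model matrix per tree into a single float32 array,
    ready to upload as a GPU instance buffer."""
    n = len(positions)
    mats = np.tile(np.eye(4, dtype=np.float32), (n, 1, 1))
    for i, ((x, y, z), s) in enumerate(zip(positions, scales)):
        mats[i, 0, 0] = mats[i, 1, 1] = mats[i, 2, 2] = s   # uniform scale
        mats[i, :3, 3] = (x, y, z)                          # translation
    return mats
```

The GPU then reads its own matrix per instance, so the draw-call count stays constant no matter how many trees are in view.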
During my many drives around BC (highly recommend btw), I would often look out the car window and see the beautiful landscapes. Snow-topped mountains covered in vast numbers of trees, with exposed cliff faces, waterfalls and shadows adding depth and beauty to the scene. When I looked at all those trees, I couldn’t really see the individual trees at that distance; it looked more like a shag rug covering the hillside. I thought “Ah ha! Why not render a shag rug at very long distances?” This concept would pop back into my head every now and again until one day, when I watched a video by Inigo Quilez, where he planted a forested landscape using a similar approach.
The technique Inigo presents (in a simple and easy to understand way I might add – he’s great) produced excellent results as can be seen above. However, it may not quite be the shag rug approach that I was imagining. Regardless, as usual, he offers inspiring methods that can be used as food for thought to expand the shag rug concept. Something I will do in the not-so-distant future.
Other aspects of rendering the forest are the tree species, environment and seasons. In many video games, especially higher end AAA games, they incorporate the idea of biomes: “A large naturally occurring community of flora and fauna occupying a major habitat.”
Biomes in Ghost Recon Wildlands
https://666uille.wordpress.com/wp-content/uploads/2017/03/gdc2017_ghostreconwildlands_terrainandtechnologytools-onlinevideos1.pdf
Each biome in a video game would typically contain its own set of 3D models, materials, textures, shaders, etc. representing the “look and feel” of that particular biome. The results are amazing. From Ghost Recon Wildlands to Red Dead Redemption 2, some of the scenes are jaw-dropping. I can’t wait to get Bending Time’s lifelike virtual world to that point, though the approach might be different still.
With all the map data around the globe, we may not need to constrain ourselves to a fixed set of biomes. Instead, we might be able to procedurally generate and place 3D models according to the map data. For example, since we know the average temperature and humidity levels for every area of the globe, we can use this data (along with other data as needed) to manipulate or select the models, materials, etc. for that area.
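As a toy illustration of that idea, here is a rough classifier driven by temperature and humidity. The thresholds and biome names are entirely made up for illustration; real biome classification (e.g. Whittaker-style diagrams) is far more nuanced:

```python
def classify_biome(temp_c, humidity):
    """Very rough biome pick from annual mean temperature (deg C) and
    relative humidity in [0, 1]. Thresholds are illustrative only."""
    if temp_c < 0:
        return "tundra"
    if humidity < 0.25:
        return "desert" if temp_c > 18 else "steppe"
    if temp_c > 22:
        return "tropical forest" if humidity > 0.6 else "savanna"
    return "temperate forest" if humidity > 0.5 else "grassland"
```

The output would then index into a set of models, materials and shaders for that area, rather than an artist hand-painting biome regions.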
The procedural algorithms to do this work are non-trivial, especially to create visually appealing and coherent scenes like those seen in AAA games. This is exemplified by the video game developers still needing artists to correct and polish the final scene. My idea for Bending Time is to use the lifelike virtual world user community effectively as the artists in this case. An ideal scenario is Bending Time hosts the base data and users can correct it, like a 3D globe Wikipedia of sorts. This is a bit of a moonshot idea but hopefully we’ll get there one day.
Grass Rendering
A closely related topic to forests is grass rendering. When it comes to grass, you can easily appreciate that there are orders of magnitude more blades of grass on the planet than there are trees. The good news is they only contribute to a 3D scene up to a certain distance. Grass adds very little to a scene when it is, let’s say, 10 km away from the observer.
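That distance cutoff can be expressed as a simple density falloff, so blade counts ramp down smoothly instead of vanishing at a hard boundary. The distances below are illustrative:

```python
def grass_density(distance_m, full_m=50.0, cutoff_m=300.0):
    """Fraction of grass blades to render at a given camera distance:
    full density up close, a linear fade, then none past the cutoff.
    The default distances are illustrative, not tuned values."""
    if distance_m <= full_m:
        return 1.0
    if distance_m >= cutoff_m:
        return 0.0
    return 1.0 - (distance_m - full_m) / (cutoff_m - full_m)
```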
Brano Kemen wrote a very nice article years ago about the grass rendering in Outerra. He used three distance-from-camera levels of fidelity. It’s amazing he wrote this article 12 years ago! In fact, his whole Outerra project was way ahead of its time.
Procedural grass rendering in Outerra
https://outerra.blogspot.com/2012/05/procedural-grass-rendering.html
One of the important reasons to render grass in a game is to cover up flat and often repetitive ground textures. Waving grass effectively brings the ground to life, increasing the realism and the user’s overall immersion in the scene. There are numerous other resources available on the topic, but Kemen’s article lays out the basics, so I’ll leave the actual rendering part at that for now.
Regarding the placement of grass (the geographic areas, not the individual blades), it is similar to forests in that we use map data to determine which areas of the ground are covered in grass. A common source of data for determining what’s on the ground is land use or land cover [classification] data. Most nations maintain data on their citizens’ use of the land to support a variety of planning activities. For example, Canada maintains land cover data to track the percentage of land being used for agricultural purposes. Below is a snippet of this data for the Metro Vancouver area.
Land cover data for the Metro Vancouver area
https://open.canada.ca/data/en/dataset/16d2f828-96bb-468d-9b7d-1307c81e17b8
This data is processed from open satellite imagery, where the source data often has a resolution of 30 meters. It seems like this could be useful for our grass placement but, when you zoom in, there are so many variations in what is actually on the ground that this resolution of land cover data is close to useless when it comes to placing and rendering grass. For example, if we look at the highlighted red square on the right-hand side of the above image, we can download a much higher resolution optical aerial image of that region to illustrate the point.
Optical Aerial Image from the Township of Langley’s Open Data Portal
https://data-tol.opendata.arcgis.com/
An aerial image like this gives us a much better sense of what is on the ground. Looking at what appears to be a walking path in the upper portion of the image, imagine virtually walking down that path and seeing grass rendered in the field to the left and right of us. This makes sense; the border between the path and the grass seems clear. But now imagine walking down the street in the little suburban area. It becomes less clear exactly where the road ends and the grass on the front yards starts. This level of fidelity in mapping seems crazy but, as a society, civilization even, we are getting there.
Having said that, with grass, the placement can be generalized even further because not many people remember a particular patch of grass in their fondest memories, the way they might remember a memorable tree. For forested regions, the ground cover is particularly unmemorable, which is why generalized biome data, like that used in AAA games, might be a good way to start for a lifelike virtual world.
Ground Textures
Obviously, there are many other things on the ground that make up a virtual scene. In pretty much every video game that has terrain, the textures applied to the terrain mesh form the foundation on top of which everything else is rendered. From this perspective, this blog post is backwards, working its way down from the forest canopy to the ground. Going backwards tracks with my personality, but I digress.
When texturing the ground, game developers and artists typically use a “tileable” texture. A tileable ground texture would typically span about 2-3 meters of ground cover and then repeat itself when you get to the border of the image.
Tileable ground texture
https://www.independent-software.com/tileable-repeatable-hires-terrain-textures-for-download.html
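Tiling works by wrapping world-space coordinates into repeating texture coordinates, e.g. with a modulo. A minimal sketch, assuming a 2 m tile:

```python
def tile_uv(world_x, world_z, tile_size_m=2.0):
    """Map a world position to repeating texture coordinates in [0, 1);
    the texture repeats every tile_size_m metres of ground."""
    return (world_x / tile_size_m) % 1.0, (world_z / tile_size_m) % 1.0
```

(In a shader this is usually just the texture sampler's repeat/wrap mode doing the same arithmetic.)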
One of the downsides to tiling ground textures is that the tile grid starts to make itself apparent over larger distances.
A common approach to reduce tiling artifacts is texture splatting. This is the process of selecting a set of materials to represent the ground (say 4) and using a control image to indicate which materials, and to what extent, should appear at each pixel. This requires the ground textures to support transparency. The results can be quite effective.
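A minimal sketch of the splatting idea with NumPy: per-pixel weights from a control map blend a stack of material layers. The shapes and weights here are illustrative; on a GPU this blend happens per fragment in a shader:

```python
import numpy as np

def splat_blend(layers, weights):
    """Blend ground texture layers per pixel using a control (splat) map.
    layers:  (L, H, W, 3) RGB material textures
    weights: (H, W, L) per-pixel material weights"""
    weights = weights / weights.sum(axis=-1, keepdims=True)  # normalize
    # Weighted sum over the L material layers for each pixel.
    return np.einsum('lhwc,hwl->hwc', layers, weights)
```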
Another ground texture technique used to enhance our terrain is bump mapping. Bump mapping is a 3D technique for rendering realistic textures on surfaces that otherwise would look flat.
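The normals that a bump/normal map encodes can be derived from a height map with finite differences. A minimal sketch:

```python
import numpy as np

def normals_from_height(height, strength=1.0):
    """Per-pixel surface normals from a 2D height map via central
    differences -- the basis of bump/normal mapping. Returns unit
    normals of shape (H, W, 3), with z pointing out of the surface."""
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    n = np.dstack((-dx, -dy, np.ones_like(height)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```

Lighting computed against these perturbed normals makes a flat quad appear rough or creviced without adding geometry.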
This is all fine and dandy but how do we *ahem* mesh together aerial imagery from the mapping industry with ground textures from the video game industry? If we are using imagery that is less than a few meters in resolution (the Township of Langley image shown previously has a resolution of 7.5 cm) then I think it’s clear that the aerial imagery should form the basis of our ground texture. The key will be in blending in colour-appropriate ground textures and applying bump maps to increase the visual fidelity of the terrain.
This is where I left off on the subject when I was working on Bending Time back in 2021. I will pick this up as part of the enhanced terrain work I’ve mentioned before. I look forward to showing you some results!
Videte silvam ad arbores.
--Sean