Bending Time

Lifelike Virtual World Blog

The Business of Bending Time

9/10/2024


 
I kept my previous blog post, “The Business of Virtual Worlds,” intentionally generic so it could [hopefully] be read and understood by everyone with an interest in virtual worlds. I want people to understand how a lifelike virtual world fits into the grand scheme of our digital futures. But that left a void: I still wanted to discuss my own experiences and the approach I’ve taken with Bending Time, which is what I am addressing here.
 
To me, it’s plain as day: a lifelike virtual world is going to play a significant part in our future daily lives. We will conduct most of our online activities in a true-to-life digital rendition of our world because it will feel familiar and comfortable to the majority of the population. Where possible, it will enable organizations to conduct their business in a similar way to the physical world. The popularity of fantasy virtual worlds will certainly increase over time, but they will remain limited to a smaller minority of the overall global population. I’m not denigrating these worlds (fill your boots); I just don’t think they will become as widely used as a lifelike virtual world.
 
Bending Time is going to capture the lifelike virtual world market by starting with the monetization of real-world simulator games as a beachhead and growing into other markets from there, ultimately working toward a global-scale lifelike virtual world.
 
Innovator’s Dilemma

When I first read Clayton Christensen’s “The Innovator’s Dilemma,” I felt I had really learned something about the business world, and it piqued my interest. In the years that followed, as an innovator, I witnessed the dilemma firsthand in my corporate job. Large corporations cater to their current customers and processes, and struggle to break out of that mold to create a new product for new customers and/or in a new market. MBA managers are effectively taught not to do this: execute to the known quantity, stay on target, etc. This was recently brought to light in Paul Graham’s “Founder Mode” essay.
 
Planet-Scale

From my experiences on various projects, mainly the radar simulator, I firmly believe that a planet-wide virtual world is where the real innovation is. There were problems I could see back then that, if solved, would open up new opportunities. The trouble is, as I discussed in my previous blog post, you don’t need to model the whole planet for most business cases. If you want to create a 3D virtual mine site, for example, gather maps for that area and build the virtual site for just that region. Want to do a 3D mashup for a new capital development? Pull in the data you need just for that area and away you go. The problem is that those costs are repeated for every new project. This is where the planet-wide virtual world “solves a problem.” However, individual customers don’t want to pay for you to solve the planet-scale problems.

To help illustrate the challenges, a very large game world today could be 100 km x 100 km. However, if you are in a flight simulator, you can see for almost a thousand kilometers in any direction! It’s a whole ‘nother scale. And the approach to building the planet-wide lifelike virtual world is very different compared to the approach to building a video game.
 
Business Cases

Here are a few business cases to help understand the potential of the lifelike virtual world.
 
Driver’s Ed: There are 2 million new drivers each year in the US, and schools still use static PDFs (hard copies even!) to teach them how to be safe on the road. An accessible lifelike virtual world, putting young drivers in virtual traffic, could easily be monetized. “But Sean, there will be no drivers once the world shifts to self-driving vehicles.” Self-driving vehicles will roll out in key areas, but we humans will be driving for a long time yet.
 
Online Simulator Training: Early on, I worked with a marine captain who was very interested in setting up his own school to teach students. He was excited by the possibility of sharing his years of knowledge with students across the globe, and he loved the idea of Bending Time supporting his passion. The problem came down to who would fund it. Marine training schools had zero interest.
 
Visualizations: Some time later I managed to join the team of an engineering company that was bidding on a new greenway project for the City of Vancouver. I pitched the idea of developing a virtual environment for the greenway and making a VR experience for the public: they would be able to hop on a virtual version of the proposed streetcar and ride down the greenway. They loved it! However, the funds never materialized, and the government changed hands and cancelled the project.

Virtual Tourism: I spoke with a couple of social coordinators at senior living homes about the possibility of residents using VR to visit various destinations around the virtual globe. It would provide an opportunity to keep their minds active when their bodies may not permit as much. They loved it! In fact, Rendever continues to forge a path for this business case today. However, it turns out we don’t really care enough about our elders to pay for this. It’s sad. One day, we’ll be able to feed their minds’ hunger and keep them as happy as possible in their final years.

This may all sound like me lamenting the fact that I wasn’t able to close a deal, and maybe it is to some extent, but there’s a real point to be made here. Before I make it, though, I need to address the “right way to do things.”
 
The Textbook Approach

The textbook approach to startups is to first discover a customer’s pain points and then build to alleviate those. Paul Graham has been giving this advice to Y Combinator enrollees for decades, and it is widely accepted as the way to get things done in the startup world. For any incremental startup, which probably accounts for more than 90% of them, I would agree. However, there are some startups where the market is clear; it’s just a lot of work to build. We often refer to these as moonshots. For example, did Sam Altman “prove the market” before embarking on his OpenAI journey? No, of course not. Did Blake Scholl get customer letters of intent before even considering launching Boom Supersonic? He may have gotten LOIs, but that would have been lip service for someone else’s benefit. As I tweeted, “you gotta love supersonic jets to invest in supersonic jets.” The point is that the lifelike virtual world, and ultimately the metaverse, may be one of the last big moonshots in software.
 
The lifelike virtual world is the moonshot that ultimately opens the door to the 3D metaverse.
 
Games

Through my trials and tribulations on Bending Time, I learned that the gaming industry is still one of the best markets in which to launch new technology and have users adopt it. This is why I plan to start with simulator games, despite their smaller market size.

The "minimum viable product" for Bending Time is a 3D Earth that showcases game-quality terrain, massive forest rendering and dynamic ocean rendering. The user will be able to fly around the planet from space to surface, enjoying the beauty of the natural world.

The planet-wide technology and prototypes are mature enough such that with a modest financial investment, the initial showcase app can be launched.

The possible simulation games being considered after launch include:

  • A driving simulation along the Sea-to-Sky Highway
  • A boating/wildlife experience in the Great Bear Rainforest
  • A virtual train ride through the Canadian Rockies

Conclusion

For all the reasons discussed above, I cannot rely on a “customer’s problem” in order to take this moonshot. The incremental path of going from one small project to another in hopes of amassing enough fuel to go to the moon is just not viable. The technology is ready, there are enough business cases to prepare the ship, I just need some fuel and a small crew and we’re off!

The Business of Virtual Worlds

9/7/2024


 
Since cavemen watched birds fly overhead, humanity has dreamt of vast worlds where we can escape the bounds of our reality. A hundred millennia later, Keanu Reeves captivated us with his portrayal of Neo in The Matrix. And now, with the advent of virtual reality goggles and amazing 3D worlds, we are getting closer to the possibility of living out this dream.
​
My first virtual world experience was playing Colossal Cave Adventure, one of the first text-based adventure games, on our Apple IIe.
Picture
Colossal Cave Adventure – Will Crowther
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure

​After text-based adventure games, Multi-User Dungeons (MUDs) became popular followed by a series of historical events, all points on a path toward a digital future some call the Metaverse, as depicted by McKinsey & Company in their “Value creation in the metaverse” report.
Picture
History of the Metaverse – McKinsey & Company
https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/value-creation-in-the-metaverse

MMOs

After text-based adventure games and MUDs, massively-multiplayer online (MMO) games hit the ether. They represented the first wave of graphical virtual worlds.

There are different types of MMOs including MMO Role-Playing Games (MMORPG) and MMO First-Person Shooters (MMOFPS) among others. A role-playing game is where the player takes on a character in the game like a hero or wizard. A first-person shooter game is where the game camera is from the player’s perspective, oftentimes with a ballistic weapon of some sort. Call of Duty is probably the most recognized example of an FPS game. For the purposes of this post, I’m going to lump them all into the MMO category as their differences are not material to this article.

One distinctive feature of an MMO is that the world persists online 24/7. This not only increases the cost to develop the game but also adds the ongoing maintenance cost of the servers hosting the world and its virtual inhabitants.
​
The first so-called “big three” MMOs were:
​
  • Ultima Online – released in Sept 1997
  • EverQuest – released in March 1999
  • Asheron’s Call – released in November 1999
 
While there were many MMOs before them, these eventually became host to many, many online players, ushering in a new era of online play for gamers around the world.

Picture
Ultima Online
https://www.mobygames.com/game/1762/ultima-online/

Along with these new games came new business models. In addition to buying the game box at a retail outlet (now referred to as buy-to-play or B2P), users had to pay a monthly subscription fee. This is known as pay-to-play (P2P). As sales numbers showed, the variety and interactions with other human players (as opposed to playing a game on your own) provided enough entertainment value to justify this new cost to gamers.
​
Since the success of early MMOs, game developers have scrambled to assemble teams to develop the next big MMO in hopes of capitalizing on this newly unlocked revenue stream. Popular MMOs from the early 2000s onward are listed below with their release dates.

  • Entropia Universe – May 2002
  • Final Fantasy XI – November 2002
  • EVE Online – May 2003
  • World of Warcraft – November 2004
  • Club Penguin – October 2005
  • Guild Wars 2 – August 2012
  • Final Fantasy XIV – August 2013
  • Elder Scrolls Online – April 2014
  • Black Desert Online – July 2015
Picture
World of Warcraft
https://worldofwarcraft.blizzard.com/en-us/

Most of these MMOs followed their predecessors and charged their users a monthly subscription fee… except for Entropia Universe.

Entropia Universe, which I am the least familiar with and only recently heard about from someone on X, employed a model in which its primary revenue stream came from users paying real money in exchange for an in-game currency known as Project Entropia Dollars (PED). Users would then use this currency to purchase virtual items that other users had created, establishing one of the first virtual economies.

This approach from Entropia Universe’s developer, MindArk, is a form of the business model known as microtransactions. Microtransactions involve users purchasing virtual goods for small sums of money. This model is often employed in free-to-play games: the game is free to download and fully functional to play, but additional content can be purchased in-game.

The free-to-play model was exemplified by the kids MMO Club Penguin. Young people (including my own kids) could join the online Flash game for free, create their own penguin character and interact with other user-controlled penguins playing games right away, competing in riddles and even earning in-game coins (which were quite *ahem* flashy). Users were offered additional content like larger home igloos, access to parties, and the ability to adopt these cute little pets known as Puffles, by paying a monthly fee. (Or rather, their parents paying a monthly fee.) Club Penguin went on to pull in $40 million per year at its peak, quite a success story for a few Canadian game developers.
​
I’ve introduced several terms and business models; here's a quick summary:
Picture
MMO Business Models
Virtual Worlds
 
Parallel to MMO games are virtual worlds. The main difference between the two is that games generally have a goal: the player sets out to achieve something, defeat an enemy or what have you. In contrast, virtual worlds are “open worlds” where there are no specific tasks; the user is free to roam the environment as they please. Below is a list of prominent virtual worlds shown with their release dates.

  • Active Worlds – June 1995
  • Habbo – August 2000
  • Entropia Universe – May 2002
  • Second Life – June 2003
  • IMVU – April 2004
  • Roblox – September 2006
  • Minecraft – November 2011
  • VRChat – January 2014
  • Fortnite – July 2017
  • Decentraland – February 2020
 
Note I intentionally included Entropia Universe on both lists. It does appear to meet the criteria of both an MMO and a virtual world. One could argue the same for Fortnite but the world doesn’t persist 24/7 like an MMO. In Fortnite, environments are spawned when a group of users launches a session. Having said that, if I put on some fuzzy glasses, I could (and do) call MMOs virtual worlds, in the general sense. An online environment.
 
When it comes to the business model of virtual worlds, we can imagine it’s a bit different from games. After all, for the average gamer (who is male), heading out on an adventure to make a conquest and “win” something is built into their DNA. In an open world, what is the value the young male gamer receives from doing… nothing? This is where things get interesting.
 
Let’s take Second Life, for example. Its creator, Linden Lab, provides free access to its virtual world, making it a free-to-play business model. Linden Lab also doesn’t charge its users a monthly subscription fee. (Though some monthly subscriptions are available for premium access.) So how do they make money? They do so by enabling users to create items in-world and sell them using Linden’s own virtual currency, known as the Linden Dollar (L$). Users can purchase Linden Dollars using US dollars on the LindeX exchange, thus enabling a completely user-driven virtual economy. This is special. It somewhat removes Linden Lab from direct control over its business and leaves it in the hands of its users, who are quite entrepreneurial btw. From Wikipedia: “In 2009, the total size of the Second Life economy grew 65% to US$567 million, about 25% of the entire U.S. virtual goods market. Gross resident earnings are US$55 million in 2009 – 11% growth over 2008.”
​
One can make the argument that Second Life pioneered the user-generated content business model.
Picture
Second Life
https://secondlife.com/

Virtual Currency

​While Second Life created their own virtual currency, the Linden Dollar (L$), we’ve since seen the rise of cryptocurrencies and blockchains like Bitcoin and Ethereum. If the crypto tech bros have their way, all future virtual worlds will use a cryptocurrency as opposed to a corporately managed virtual currency in hopes of increasing its adoption. As a business owner, I wouldn’t want to tie the success of my virtual world business directly to a crypto stock that is subject to mass speculation. (“Casino culture” as Chris Dixon says in “Read Write Own.”) So what do today’s virtual world developers do?

One answer might be for companies to create their own cryptocurrency on a blockchain like Ethereum. This is what Decentraland has done with their virtual currency, represented by the crypto token MANA. MANA is used to purchase virtual goods, avatars and virtual land.
Picture
Ethereum
https://ethereum.org/en/

The technological and societal phenomenon that is cryptocurrencies and blockchains is still unfolding at time of writing. Time will tell where the chips *er* tokens will fall.
 
Virtual Land

Part of Second Life’s success was the concept of owning virtual land. Your own plot of virtual dirt to build on as you see fit. This was quite attractive to the residents of Second Life as it gave them freedom of expression to share with the rest of the Second Life community. This became so popular that even corporations started hopping onto the bandwagon to develop in-world spaces known as islands. This fueled the idea that virtual land will one day be scarce and thus hold tremendous value. This belief is still held today, with companies like Earth2.io selling plots of land that they parceled out across a map of the real world. The problem is the value of this land is based on scarcity, and scarcity could easily disappear if a competing virtual world is developed.

Take mirror worlds, for example. If there are multiple mirror worlds, and others could be developed at any time, how will the virtual land continue to be scarce? I believe the answer lies in the fact that there is only one real Earth. By extension, in the long run, there will only be one mirror world. Don’t get me wrong though, until then there will be versions of mirror worlds that provide different capabilities. For example, one world may allow you to alter reality and fly anywhere you wish around the globe, play with magic in real-world settings, etc., while another mirror world may enforce the laws of physics to make it as realistic to our physical existence as possible.

This might seem like the position being taken here is that land in the mirror world will ultimately be scarce because there will eventually only be one mirror world. This is not the case. We need to explore mirror worlds further to fully understand the forces at play.
 
Mirror Worlds

As I laid out in my first blog post, “Hello Again, World,” my passion for mirror worlds runs deep. At Bending Time, I am tackling the technological challenges to deliver an accessible mirror world to the masses. However, ‘mirror world’ is still too general a term, which is why I wrote my article “Through the Mirror World.” There I divide mirror worlds into three categories:
​
  • Augmented World
  • Digital Twin
  • Lifelike Virtual World
 
Read the article but, in short:

  • The augmented world will be seen through augmented reality (AR) glasses, for which consumer adoption is still a long way off
  • Digital twins have limited value in the broader scope of mirror worlds (except for grand visions like predicting climate and weather like NVIDIA’s Earth-2 project) – but they can be valuable for monitoring and improving complex and important systems
  • The lifelike virtual world can provide entertainment value, which will lead to learning and training opportunities (among many more in years to come)
 
Let’s look at a brief history of the [mirror] universe:

  • 1970: NASA uses the first digital twin to help the Apollo 13 astronauts return home safely from their lunar mission
  • 1991: The term “mirror world” was coined by Yale professor, David Gelernter, in his “Mirror Worlds” book
  • 1993: ART+COM developed a streaming 3D globe called TerraVision
  • Late 1990s: Intrinsic Graphics developed a spinning, zoomable 3D globe
  • 1999: Keyhole, Inc. was spun off from Intrinsic to continue development of the 3D globe
  • 2003: NASA releases WorldWind virtual globe
  • 2004: Google acquires Keyhole
  • 2004: OpenStreetMap is launched
  • 2005: Google releases a re-branded version of the Keyhole 3D globe called Google Earth
  • 2010: John Hanke and Google start Niantic Labs as an internal startup
  • 2011: Analytical Graphics Inc. (AGI) develops web-based 3D globe, Cesium
  • 2015: Sean Treleaven founds Bending Time Technologies Inc. to develop a lifelike virtual world 😉
  • 2015: Niantic Labs spun off from new parent, Alphabet Inc.
  • 2016: Pokémon Go is released and well received across the globe
  • 2017: Ori Inbar outlines the concept of the AR Cloud
  • 2018: Google de-prioritizes development of Google Earth
  • 2019: AGI spins off Cesium as independent company
  • 2019: Kevin Kelly writes and publishes “Mirrorworld” in Wired magazine
  • 2024: Bentley Systems acquires Cesium
 
This brief history is intended to present the rough order of events in the history of mirror worlds. It is not intended to make a particular claim of exactly what happened and when. To be clear, I am in no way affiliated with any of the parties mentioned above.

You’ll also note there is no mention of William Gibson’s “Neuromancer,” Neal Stephenson’s “Snow Crash” or Ernest Cline’s “Ready Player One” because none of these novels describe mirror worlds. They all represent fictitious virtual worlds. This is a key point. Mirror worlds have not received the same attention as fictitious worlds. Some may argue, yes, they have because there is overlap like avatars, voice chat, multi-player, etc., but the fact remains we still can’t drive down a lifelike representation of the Amalfi coast in our virtual convertible, for example.

Let’s return to the question hanging in the air: why won’t virtual land in the mirror world become scarce and thus valuable? The answer is openness. The force for openness will ensure mirror worlds and, ultimately, our online digital presence is not controlled by a corporation. And if it's not controlled by a corporation, there won't be a single entity controlling and doling out virtual land parcels for a profit.

Force for openness? This might sound like an airy-fairy concept but it’s real. The internet and web emerged as open [protocol] networks as Chris Dixon espouses in his “Read Write Own.” Open-source software powers our digital economy. HTML 5 won over Adobe’s Flash. And more apropos of mirror worlds, the openness force is pushing on maps.
 
What’s next for mirror worlds? When will we get to drive the virtual Amalfi coast? I don’t have a crystal ball, but I can point out some reasons why they haven’t arrived yet.
 
For the augmented world, the simple answer is that wide field-of-view, sunglass-like AR glasses are not technically possible yet. And while walking around with your phone in your hand catching Pokémon was novel at the time, it’s just too dorky for most people. Until the technology is there to provide people with a comfortable, appealing experience, Niantic must continue to find niche ways to make money.
Picture
Niantic Labs
https://nianticlabs.com/

Regarding digital twins on a global scale, the only thing worth noting here is NVIDIA’s Earth-2 project. I suspect other digital twins will be more localized to particular systems/products, where organizations try to reduce costs in one form or another.
Picture
NVIDIA’s Earth-2 Project
https://www.nvidia.com/en-us/high-performance-computing/earth-2/

​This brings us to lifelike virtual worlds, which really started with 3D globes. Regardless of who invented the 3D globe, Google Earth was the eventual winner, seeing mass adoption in geospatial communities across the world. Michael Jones even claimed Google Earth had 400 million users during his keynote speech at GeoWeb 2008. RIP Michael. While this number is certainly debatable, we’ll just say millions of people use Google Earth on a regular basis.
Picture
Google Earth
https://earth.google.com/

For the past 10 years or so, there haven’t been a lot of updates to 3D globes. Cesium was spun off from AGI and was recently acquired by Bentley Systems to put more focus on Architecture, Engineering, Construction, and Operations (AECO). Earth2.io was launched in 2020 and continues to dupe young people into putting their money into virtual land parcels. And meanwhile, military training simulations continue to plod along, developing their products and tools to satisfy their sugar daddy’s whims (aka the DoD).
​
Seems ripe for the pickin’. Why hasn’t someone or some company jumped at the opportunity? Simply put, it’s hard. Not just technically but, maybe more so, commercially. It requires the technical AND financial cooperation of people across multiple industries including maps, games and simulations.
Picture
Lifelike virtual worlds – the gap between maps, games and simulations
There are other factors in the slow adoption of a lifelike virtual world. For instance, true-scale games are not attractive to most gamers. If it takes a gamer an hour to drive somewhere (like it would in real life), they will give up before they ever reach the destination. Today’s attention spans are too short to wait for the gratifying end. However, there is a smaller market of more patient users, and that is simulation gamers.
​
Thanks to Microsoft’s investment over the past few years, one of the best depictions of a lifelike virtual world today is the reincarnation of their Flight Simulator game. The screenshots and videos from the game are truly amazing. Will Microsoft then be the one to capture the entire lifelike virtual world market? Maybe. But I continue to push forward on Bending Time for two primary reasons: 1) Microsoft is a large corporation, and large corporations move slowly, and 2) the omnipresent force of openness will continue to push corporations aside.
Picture
Microsoft Flight Simulator (MSFS)
https://www.flightsimulator.com/

The other main reason for the slow adoption of a planet-wide lifelike virtual world is that pretty much all virtual activities/experiences can be developed on a local scale. If you want to develop a virtual African safari, just build a relatively small virtual environment in Kenya somewhere and populate it with zebras, lions and the lot. Want to make a virtual tour of a hotel in Bali? Set up your development environment with some Bali maps and away you go.

People have been making these sorts of mashups for some time now. The friction comes in when you incur the same costs each time you set up the development environment for a new location.

On top of that, as I discussed in my “Our [Digital] Planet” article, accessibility goes a long way toward adoption. For example, when I first played MSFS, I had to sit through a fairly large initial download, and then follow-on patches, area content, etc., which was not a great first impression.
​
An accessible, planet-wide, lifelike virtual world using open data will be the first mainstream mirror world.
 
Metaverse
 
Speaking of mainstream, I haven’t really gotten into the discussion of the other ‘M’ word. After all, how could you talk about the business of virtual worlds without talking about The Metaverse? One of the reasons is mirror worlds do not typically come up in metaverse conversations. People naturally gravitate towards Stephenson’s version of the metaverse as he depicts in “Snow Crash.” Wagner James Au, an embedded reporter and author writing about Second Life, heavily leans into Stephenson’s metaverse, as I suspect Second Life’s creator, Philip Rosedale, also does. It makes for better fan fiction to think about magical powers and fantastical avatar outfits/skins. The reality of the situation tells a different story.

Residents in Second Life rarely use their real identities as I think they typically are escaping their real life and wanting to enjoy something new and fresh. (Citation needed.) In contrast, Facebook’s Mark Zuckerberg insisted on requiring its users to use real identities as Wagner James Au discusses in his “Making a Metaverse that Matters” book. With Facebook’s climb to Silicon Valley royalty, and Second Life’s decline to yesterday’s newspaper, I can confidently say that real life identities are associated with “mainstream” a lot more than false identities. It adds a level of responsibility and accountability on the user’s part.

Circling back to mirror worlds, the social aspect remains a relatively untouched subject. Wade Roush, a writer on new technologies, wrote “Second Earth” back in 2007, in which he says “… many computer professionals think the idea of a ‘Second Earth’ [a mashup between Second Life and Google Earth] is so cool that it’s inevitable...” He’s right, but the challenge of combining a persistent 24/7 virtual world hosting thousands of simultaneous users with a rich, 3D, lifelike virtual world complete with cars driving down the virtual highways is too big to tackle all at once.
Picture
“Second Earth” by Wade Roush
https://www.technologyreview.com/2007/06/18/272040/second-earth/

There are different groups pursuing different paths on what they hope will be the road to the promised land of the metaverse. I am not really any different, but my interest remains in building a lifelike virtual world.
​
These paths are on different levels, creating/leveraging different technologies, all part of what one could consider a metaverse stack, if you will. McKinsey came up with a “10 layers” diagram that at least provides a bit of structure, which enables us to discuss different levels and technologies of the metaverse with some sort of common understanding.
Picture
10 layers of the metaverse – McKinsey & Company
https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/value-creation-in-the-metaverse
Each layer in the diagram above could be considered an industry unto itself. That’s how big of a concept the metaverse is. To get down to brass tacks, I think of current metaverse activities as falling into two categories: front end and back end.

The front end is in the top category in McKinsey’s diagram above but, more specifically, I think of it as the technology behind rendering a 3D virtual world. The game engine, the shaders, and all the rest of the stuff that goes into rendering a real-time game environment. This grossly over-simplifies McKinsey's representation.

The back end spans the latter three categories. It consists of blockchains used for identity, cryptocurrencies, cloud networks to host maps, virtual world servers, and a plethora of other server-side technologies that are needed to make virtual worlds run.
​
A main thrust in the back end today is massively multi-user. Following on the heels of MMOs and games like Fortnite, companies are working hard to develop a generic platform to host thousands and thousands of players online at the same time. A front-runner here is the super-funded startup Improbable, who recently launched their MSquared (m^2) metaverse platform.
Picture
Improbable’s MSquared (m^2) – Platform
https://www.msquared.io/
​In parallel, Neal Stephenson has teamed up with some of Silicon Valley’s cryptocurrency leaders and launched Lamina1, a metaverse platform with heavy focus on blockchain, user-generated content and interoperability.
Picture
Lamina1 – Metaverse Platform
https://lamina1.com/home

Both these platforms would appear to be aimed at fictitious virtual worlds rather than lifelike virtual worlds, although I’m sure Herman Narula of Improbable would argue his platform is generic and supports mirror worlds too, and the proof is in the pudding with their Virtual Ballpark project. I could speculatively argue that MSquared is not architected with native geodetic coordinates, but he can really talk so I’ll pass.
 
Conclusion

Virtual worlds are a lucrative business. Key players have staked out key areas. But there are still opportunities if you look where others have not.

> n
You have entered a vast cave.
> e
There is a large rock in front of you with a bag of gold tucked behind it.
> get gold
 

Carpe diem.
 
-- Sean



Afterword

You probably noticed the near complete omission of virtual reality from this blog post. This was intentional as using VR goggles to immerse yourself in a virtual world arguably has little bearing on the development and business of the world.

I'm a big fan of VR and really look forward to launching VR capabilities in Bending Time's lifelike virtual world. But ultimately, I believe the value is in the world itself, not the device you use to join it.

Sun, Moon and Stars

8/27/2024


 
Introduction

Space: the final frontier. … To boldly go where no man has gone before. Those legendary words, originally spoken by William Shatner as Captain James T. Kirk in Star Trek, kindle a flame for many to this day. They truly embody our adventurous spirit.
​
Sometimes I get this feeling when building a lifelike virtual world, but I guess this would be a nerd’s final frontier. The thing is I love the outdoors, too. We can have both!
Picture
Stargazing in Canada
https://www.asc-csa.gc.ca/eng/blog/2018/06/29/13-amazing-stargazing-locations-in-canada.asp
Horizontal Coordinates

​When we look up at the stars, how do we describe their position in the sky? As you might imagine, we use an angle up from the horizon and the direction you are facing. These are known as horizontal coordinates. The angle up from the horizon is known as the altitude (or elevation) and the angle around the horizon is known as the azimuth, in degrees relative to Earth’s true north.
Picture
Horizontal Coordinate System
https://en.wikipedia.org/wiki/Horizontal_coordinate_system

The point directly above the Observer’s position is known as the zenith, and the point directly below is known as the nadir.

Horizontal coordinates are very useful for determining the rise and set times of an object in the sky. When an object's altitude is 0°, it is on the horizon. If at that moment its altitude is increasing, it is rising, but if its altitude is decreasing, it is setting. However, all objects on the celestial sphere are subject to diurnal motion, which always appears to be westward. --https://en.wikipedia.org/wiki/Horizontal_coordinate_system.

Horizontal coordinates are relative to the Observer’s location on Earth. But how do we communicate a celestial object’s position independently of the location on Earth? This is where the celestial sphere comes in.
 
Celestial Sphere

​The fundamental construct for understanding where stars are located in the sky is called the celestial sphere. This is an imaginary sphere with Earth at its center and the North Celestial Pole and South Celestial Pole coincident with Earth’s poles as shown below.
Picture
Celestial Sphere
https://science.nasa.gov/learn/basics-of-space-flight/chapter2-2/

Coordinates on the celestial sphere are similar to coordinates for Earth. We describe latitudes on the celestial sphere as the declination (DEC) and longitudes as the right ascension (RA). For declinations, the celestial equator is 0°, and the poles are +90° and -90°, just like latitudes on Earth. For right ascension, instead of a longitude angle in degrees, we describe the value in hours, minutes and seconds of time, where 15° equals 1 hour (which is the basis for time zones, btw).

The zero point for RA is one of the points where the ecliptic circle intersects the celestial equator circle. It's defined to be the point where the Sun crosses into the northern hemisphere beginning spring: the vernal equinox, also known as the first point of Aries, often identified by the symbol of the ram. --https://science.nasa.gov/learn/basics-of-space-flight/chapter2-2/.
​
Together, the RA and DEC form a pair of equatorial coordinates.
Picture
Equatorial Coordinates
https://lco.global/spacebook/sky/equatorial-coordinate-system

​To better understand equatorial coordinates, here is a simple example showing the coordinates of a star in the sky.
Picture
RA-Dec Example
https://lco.global/spacebook/sky/equatorial-coordinate-system

To convert between [celestial] equatorial coordinates and Earth’s geographic coordinates, we need to determine the longitude of Earth below the vernal equinox. To do this, we must orient the celestial sphere to the Observer’s current day and time. (Remember the Earth is rotating within the celestial sphere.)

First, we calculate the right ascension currently crossing Earth’s prime meridian (0°) for the Observer’s date and time, known as Greenwich mean sidereal time (GMST), using the approach described at https://aa.usno.navy.mil/faq/GAST. This involves understanding Julian dates, universal time and epochs, which I am going to omit here in an effort to not overload you with too much information.
​
Once we have GMST, we can calculate the Observer’s geographic longitude (or the star’s geographic longitude as depicted below) like so:

Geo Longitude = RA – GMST
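
To make that formula concrete, here is a minimal Python sketch. It uses the simple GMST approximation from the USNO page linked above and ignores refinements like the equation of the equinoxes and leap seconds, which is plenty for placing objects in a rendered sky; the function names are mine, not from any library.

```python
from datetime import datetime, timezone

def julian_date(dt: datetime) -> float:
    """Julian date from a timezone-aware UTC datetime (good enough for rendering)."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)   # JD 2451545.0
    return 2451545.0 + (dt - j2000).total_seconds() / 86400.0

def gmst_hours(dt: datetime) -> float:
    """Greenwich mean sidereal time in hours, USNO low-precision approximation."""
    d = julian_date(dt) - 2451545.0                          # days since J2000.0
    return (18.697374558 + 24.06570982441908 * d) % 24.0

def sub_point_longitude_deg(ra_hours: float, dt: datetime) -> float:
    """Geographic longitude (degrees, east positive) directly beneath an object
    with the given right ascension, i.e. Geo Longitude = RA - GMST."""
    lon = (ra_hours - gmst_hours(dt)) * 15.0                 # 1 hour = 15 degrees
    return ((lon + 180.0) % 360.0) - 180.0                   # wrap to [-180, 180)
```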
Picture
Hour angle and its relation to right ascension and sidereal time
https://en.wikipedia.org/wiki/Equatorial_coordinate_system

The latitude is then simply the declination angle; however, this is the geocentric latitude, not the geodetic latitude, as I touched on in my Earth Coordinate Systems blog post. To convert the geocentric latitude to a geodetic latitude, we can use well-known iterative formulae such as the one described at https://www.mathworks.com/help/aeroblks/geocentrictogeodeticlatitude.html.
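
For the simple, non-iterative case of a point on the WGS84 ellipsoid surface, the conversion collapses to a one-liner. The sketch below assumes that case; points at altitude need the iterative scheme from the MathWorks page, and the constants are the standard WGS84 values.

```python
import math

WGS84_F = 1.0 / 298.257223563         # WGS84 flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)  # first eccentricity squared

def geocentric_to_geodetic_deg(geocentric_lat_deg: float) -> float:
    """Geodetic latitude (deg) of a point on the WGS84 surface, given its
    geocentric latitude: tan(geodetic) = tan(geocentric) / (1 - e^2)."""
    phi_c = math.radians(geocentric_lat_deg)
    return math.degrees(math.atan(math.tan(phi_c) / (1.0 - WGS84_E2)))
```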
 
Ecliptic Coordinates

​The celestial sphere allows us to position objects in our sky but sometimes we want to know where objects are located within our Solar System. This is where the ecliptic coordinate system comes in.
Picture
Ecliptic Coordinate System
https://en.wikipedia.org/wiki/Ecliptic_coordinate_system

In the figure above, the Sun is at the center of the ecliptic coordinate system and, in this case, coordinates are referred to as heliocentric. However, we often like to know where the object is relative to Earth. In this case, imagine the same x-y-z axes shifted to the center of the Earth. These are then known as geocentric ecliptic coordinates.
​
In both cases, the ecliptic latitude and longitude are interpreted the same way. Latitude is an angle up from the ecliptic plane and longitude is an angle east from the vernal equinox. But we do need one more parameter to locate the object in Solar System space, and that is distance. We don’t use distance for stars because we consider them infinitely far away in this case. In the figure below, the ecliptic latitude is denoted as b for heliocentric coordinates and β for geocentric. The ecliptic longitude is denoted as l for heliocentric and λ for geocentric. The distance is denoted as r for heliocentric coordinates and Δ (delta) for geocentric coordinates. (Geocentric distances, such as the Moon’s, are sometimes expressed in Earth radii.)
Picture
Heliocentric Ecliptic Coordinates
https://commons.wikimedia.org/wiki/File:Heliocentric_ecliptic_coordinate_systems.svg

​Galactic Coordinates

​Beyond our own star system, we can locate objects including stars, nebulae, etc. in our Milky Way galaxy using the galactic coordinate system. The galactic plane coincides with our galaxy’s disc as shown below. Our Sun is at the origin, the longitudinal axis runs through to the center of the galaxy, and the latitudinal axis runs parallel to the galactic disc. Similar to the other celestial frames, the galactic latitude is then the angle up from the disc and the galactic longitude is the angle eastward along the galactic equator.
Picture
Galactic Coordinates
https://en.wikipedia.org/wiki/Galactic_coordinate_system

I’m including a short description of the galactic coordinate system here mainly for the sake of completeness. One practical application is creating a Milky Way skybox for our planet.
​
Most of us would’ve seen the NASA image of our galaxy below, or something similar. But how do we make this into a skybox?
Picture
Milky Way Galaxy
https://svs.gsfc.nasa.gov/4851/

The first thing we need to know is the image’s coordinates. Well, it was intentionally processed to fit our galactic coordinate system. The image is centered on galactic longitude 0°, with longitude increasing to the left. The image below depicts the layout of the grid. Now imagine this grid (in rectangular form) overlaid on top of the Milky Way image above and, voilà, you’ve got it oriented.
Picture
Galactic Grid
https://cse.ssl.berkeley.edu/chips_epo/coordinates2.html

​Next we need to take the galactic image and fit it around our planet. To do this, we need to convert the galactic coordinates to equatorial coordinates, which we can do by using the formulae provided on Wikipedia and shown below.
Picture
Galactic coordinate conversion
https://en.wikipedia.org/wiki/Galactic_coordinate_system
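
As a rough sketch of the galactic-to-equatorial direction of those formulae, useful for re-projecting the skybox image, something like the following works; the J2000 pole constants are taken from the same Wikipedia page, and the function naming is my own, not a library call.

```python
import math

# J2000 equatorial coordinates of the north galactic pole, and the galactic
# longitude of the north celestial pole (values from the Wikipedia page above).
ALPHA_NGP = math.radians(192.85948)
DELTA_NGP = math.radians(27.12825)
L_NCP = math.radians(122.93192)

def galactic_to_equatorial(l_deg: float, b_deg: float) -> tuple[float, float]:
    """Convert galactic (l, b) to equatorial (RA, Dec), all in degrees."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    dec = math.asin(math.sin(DELTA_NGP) * math.sin(b) +
                    math.cos(DELTA_NGP) * math.cos(b) * math.cos(L_NCP - l))
    y = math.cos(b) * math.sin(L_NCP - l)
    x = math.cos(DELTA_NGP) * math.sin(b) - \
        math.sin(DELTA_NGP) * math.cos(b) * math.cos(L_NCP - l)
    ra = (math.atan2(y, x) + ALPHA_NGP) % (2.0 * math.pi)
    return math.degrees(ra), math.degrees(dec)

# Sanity check: the galactic center (l=0, b=0) lands near RA 266.4, Dec -28.9.
print(galactic_to_equatorial(0.0, 0.0))
```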

From here, you need to create a cubemap of the image so it can be used as a skybox. Refer to the link in my Atmosphere/Sky Rendering blog post for details.

If you’re a keener and want to learn even more about galaxies, check out https://galaxiesbook.org/ by Professor Bovy at the University of Toronto. Looks like a fabulous resource to get your galactic geek on.
 
Supergalactic Coordinates

But wait, there’s more! Galaxies, that is. Most of us have probably heard of the Andromeda galaxy, but few of us would know where it is located relative to our own galaxy. This is where the supergalactic coordinate system comes in. Supergalactic coordinates keep the observer at the origin, like galactic coordinates, but define latitudes and longitudes relative to the supergalactic plane, the plane of our local supercluster of galaxies, as shown below.
Picture
Supergalactic plane
https://en.wikipedia.org/wiki/Supergalactic_coordinate_system

Again, I include this coordinate system for the sake of completeness. I’m not sure when this would be used in a lifelike virtual world except maybe for locating other celestial objects beyond our galaxy as part of some virtual astronomy class or training exercise.
 
Implementing Celestial Coordinate Systems

Most of you reading this probably aren’t astronomy aficionados or galactic geeks and just want to get down to the nuts and bolts of it. Especially if you’re a software developer who just likes to make things “go.”
​
One of the best ways to do this is by buying the classic “Practical Astronomy with your Calculator” book. The latest version of the book (cover shown below) comes with a spreadsheet containing examples of all the calculations Duffett-Smith and Zwart describe in the book. It truly is a very practical introduction to astronomy and celestial coordinates. Highly recommend. One of the lesser-known but extremely valuable uses of the book is to verify that the celestial calculations you implement in your code are correct. We don’t often say it, but unit tests ftw here.
Picture
Practical Astronomy with your Calculator or Spreadsheet
https://www.amazon.ca/Practical-Astronomy-your-Calculator-Spreadsheet-dp-0521146542/dp/0521146542/

Accurate Celestial Calculations

I feel I should mention a more comprehensive and accurate method for calculating the positions of celestial objects: the software developed by scientists at the International Astronomical Union's (IAU) Standards of Fundamental Astronomy (SOFA) group.
Picture
Standards of Fundamental Astronomy
https://www.iausofa.org/

This group of international scientists and software developers has created a software library in Fortran 77 (really?) and ANSI C. It contains a complete set of functions to calculate all sorts of numbers about the cosmos, accounting for the subtleties of celestial movements like nutation, precession, etc. That’s the good news. The bad news is it’s not easily understood. If you have time and want to integrate a complete and accurate module of astronomical functions in your software, then point your browser ship to the link above and set sail my friend.
 
Sol

​Now that you are armed with battle-ready celestial coordinate conversions, we need to apply this to the primary light source on our planet.
​
NOAA’s Global Monitoring Lab website has a good page on solar calculations; it is straightforward, with references to details including the basis of their calculations, the seminal work by Jean Meeus, “Astronomical Algorithms,” which can be found at https://www.agopax.it/Libri_astronomia/pdf/Astronomical%20Algorithms.pdf.
 
If you want to try something a little simpler, go ahead and have a look at the Position of the Sun Wikipedia page.
Picture
Finding the position of the Sun --Wikipedia
Somewhat obviously, simpler also means less accurate. It is still a useful exercise to go through the simple calculations and learn the basics by applying the formulae hands-on (a rough sketch follows the list below). For example, to calculate the position of the Sun, you’ll need to account for/understand things including:
  • Julian dates
  • Mean longitude of the Sun
  • Mean anomaly of the Sun
  • Obliquity of the ecliptic
  • Atmospheric refraction
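
Here is that rough sketch, following the low-precision algorithm on the Wikipedia page; it is good to roughly a hundredth of a degree for dates near J2000 and deliberately ignores atmospheric refraction. Pair it with the GMST code from earlier to get a sub-solar point.

```python
import math

def sun_ra_dec_deg(jd: float) -> tuple[float, float]:
    """Approximate solar RA and Dec (degrees) for a given Julian date,
    using the low-precision formulas from Wikipedia's "Position of the Sun"."""
    n = jd - 2451545.0                                   # days since J2000.0
    L = (280.460 + 0.9856474 * n) % 360.0                # mean longitude of the Sun
    g = math.radians((357.528 + 0.9856003 * n) % 360.0)  # mean anomaly of the Sun
    lam = math.radians(L + 1.915 * math.sin(g) + 0.020 * math.sin(2.0 * g))  # ecliptic longitude
    eps = math.radians(23.439 - 0.0000004 * n)           # obliquity of the ecliptic
    ra = math.degrees(math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam))) % 360.0
    dec = math.degrees(math.asin(math.sin(eps) * math.sin(lam)))
    return ra, dec
```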
 
Up until now, I’ve rocketed past details like these with barely a mention. I intentionally filtered these out to try and lay down some ground-level understanding for you. It gets distracting real quick. There are tons of resources on the web so you will have no trouble going down the rabbit holes of your choosing.
 
One website that I quite like as a quick reference, as well as a good tool to verify the results of your Sun position calculations, is suncalc.org.
Picture
https://www.suncalc.org/
Luna

The next biggest source of light for our planet is our only natural satellite, the Moon, aka Luna. We again look to Meeus’ “Astronomical Algorithms”, Chapter 47 in this case, to compute the position of Luna, which is subject to even more subtle perturbations than Sol.

The basic approach for calculating the Moon’s position is the same as that for the Sun:
  1. Calculate the Moon’s ecliptic coordinates,
  2. Convert the ecliptic coordinates to equatorial coordinates (same method/formula used for the Sun),
  3. Convert to horizontal coordinates (or whatever coordinate system that you want, e.g., geodetic/ECEF).
 
This will give you the Moon’s position as well as a distance away from Earth’s center.
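
Step 2 of that recipe, the ecliptic-to-equatorial rotation, is the same for the Sun and the Moon; here is a minimal sketch, where the default obliquity is the mean J2000 value, an assumption that is good enough for rendering.

```python
import math

def ecliptic_to_equatorial(lon_deg: float, lat_deg: float,
                           obliquity_deg: float = 23.4393) -> tuple[float, float]:
    """Convert ecliptic (longitude, latitude) to equatorial (RA, Dec), degrees in and out."""
    lam, beta, eps = (math.radians(v) for v in (lon_deg, lat_deg, obliquity_deg))
    ra = math.atan2(math.sin(lam) * math.cos(eps) - math.tan(beta) * math.sin(eps),
                    math.cos(lam))
    dec = math.asin(math.sin(beta) * math.cos(eps) +
                    math.cos(beta) * math.sin(eps) * math.sin(lam))
    return math.degrees(ra) % 360.0, math.degrees(dec)
```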

The developer of suncalc.org also has a site for the moon at www.mooncalc.org.
 
Stars

Scientists have been studying the stars for centuries. Today, there are several star catalogs in common use:
  1. Gliese Catalogue of Nearby Stars
  2. Yale Bright Star Catalog
  3. Hipparcos and Tycho Catalogues
  4. HYG Catalog
  5. Gaia Catalogue

There are more repositories of star data, but these are a few of the major ones. Refer to https://www.projectrho.com/public_html/starmaps/catalogues.php for a more complete intro to the various star catalogues and their purposes, limitations, etc.

For Bending Time, I used the HYG catalog as the data was good and contained enough information to reasonably render a nice-looking night sky. However, I never actually got around to the rendering part; I left the Milky Way skybox in place and that was it. Financing for rendering a beautiful and accurate night sky is tricky because the business case is dubious. As is the case with many other aspects of a lifelike virtual world, learning and training is where the eventual money is.
​
If we take a closer look at the HYG data, we can see the positions of the stars in the sky are encoded in equatorial coordinates, that is in right ascension (RA) and declination (DEC).
Picture
HYG Star Catalog
https://github.com/astronexus/HYG-Database

I described earlier in this blog post how to convert from RA-DEC to geodetic coordinates: lat, lon and elevation. And in my Earth Coordinate Systems blog post, I described how to convert geodetic coordinates to the Earth-Centered, Earth Fixed (ECEF) XYZ frame. So now you can orient your stars to your XYZ axes in your project.
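
Putting those pieces together, here is a hedged sketch of loading the HYG CSV and turning each naked-eye star into a unit direction vector; the 'ra', 'dec' and 'mag' column names are my assumption based on the HYG documentation, and the result still needs the GMST rotation about the polar axis to land in ECEF for a given time.

```python
import csv
import math

def star_directions(hyg_csv_path: str, mag_limit: float = 6.5):
    """Yield (x, y, z, mag) unit vectors in the equatorial frame for stars
    brighter than mag_limit, read from a HYG CSV export."""
    with open(hyg_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                ra = math.radians(float(row["ra"]) * 15.0)   # catalog RA is in hours
                dec = math.radians(float(row["dec"]))
                mag = float(row["mag"])
            except (KeyError, ValueError):
                continue                                     # skip incomplete rows
            if mag > mag_limit:
                continue
            x = math.cos(dec) * math.cos(ra)
            y = math.cos(dec) * math.sin(ra)
            z = math.sin(dec)
            yield x, y, z, mag
```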
 
Constellations

Space has captivated homo sapiens for a very long time. It’s no wonder that we drew shapes using prominent stars as key points.

Today, it can be fun for families and friends to go outside at night and spot constellations in the sky. This might seem like a casual experience, but this type of activity is important to foster the adventurous spirit of our young ones and fill their dreams with possibilities.
Picture
Winter constellations in the North
https://www.astronomy.com/observing/learn-the-constellations/

Somewhat obvious when you think about it, but constellations have been interpreted differently in history depending on the region of the world. Today, the International Astronomical Union (IAU) maintains a list of the 88 ratified constellations, with each one including the following information:
  • Name
  • Pronunciation of the name
  • Abbreviation
  • English Name
  • Genitive
  • Pronunciation of the genitive
  • Chart for screen view (GIF)
  • Chart for printing (PDF in A4 format)
  • Boundary coordinates (TXT)
Picture
Constellation chart for Ursa Major
https://www.iau.org/public/themes/constellations/

The open-source Stellarium planetarium software does a very nice job of displaying the night sky in a clean interface. In addition to the display of the stars, it includes pictorial representations of the constellations from different cultures around the world. A great project that truly caters to the global community.
Picture
Free and open source planetarium for your computer - Stellarium
stellarium.org/
This concludes the science portion of this blog post. Let's move on to rendering all these celestial objects.

Rendering the Sun

Once you have the position of the Sun in your world coordinates (e.g., ECEF), you need to render it from the Observer’s perspective.

In many 3D games, the Sun is rendered as a disc in the sky either overtop of the skybox like a decal, or rendered behind the skybox but shining through. Then effects like brightness, halo, and lens flare to name a few are added to increase the realism.
Picture
Example Rendering of the Sun in the Sky
https://help.graphisoft.com/AC/19/INT/AC19Help/Appendix_Settings/Appendix_Settings-137.htm

​For a lifelike virtual world, for greater fidelity and to support more use cases, a 3D model of the sun might be a better approach. Images of the Sun can be found at NASA’s Solar Dynamics Observatory website: https://sdo.gsfc.nasa.gov/ including a 4k texture that can be used to wrap around a Sun sphere.
Picture
Full map of the surface of the Sun
https://svs.gsfc.nasa.gov/30362/
​As for the geometry, the Sun has a radius of approximately 700,000 km. In Bending Time, all the celestial objects are tessellated at runtime using the latitude/longitude grid as opposed to the spherical polyhedron approach.
Picture
Spherical geometry (latitude/longitude)
https://en.wikipedia.org/wiki/Spherical_geometry

It’s relatively straightforward to render a sphere based on the wireframe above. Choose your latitudinal and longitudinal resolutions (2-3 degrees yields reasonable geometry), calculate the vertices, calculate the UVs and then form all the triangles.

Note that with this approach you need to take special care at the poles, as you may have seen in some 3D globes. This is discussed in the 3D Engine Design for Virtual Globes book.
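
For illustration, here is a minimal sketch of that tessellation; the resolutions are parameters, the seam column is duplicated so the UVs wrap cleanly, and the pole rows simply produce degenerate triangles rather than the more careful handling a production globe would use.

```python
import math

def tessellate_sphere(radius: float, d_lat: float = 3.0, d_lon: float = 3.0):
    """Build a latitude/longitude sphere mesh: (vertices, uvs, triangle index list)."""
    n_lat, n_lon = int(180 / d_lat), int(360 / d_lon)
    verts, uvs, tris = [], [], []
    for i in range(n_lat + 1):
        lat = math.radians(-90.0 + i * d_lat)
        for j in range(n_lon + 1):                 # extra column duplicates the seam
            lon = math.radians(-180.0 + j * d_lon)
            verts.append((radius * math.cos(lat) * math.cos(lon),
                          radius * math.cos(lat) * math.sin(lon),
                          radius * math.sin(lat)))
            uvs.append((j / n_lon, i / n_lat))
    row = n_lon + 1
    for i in range(n_lat):
        for j in range(n_lon):
            a, b = i * row + j, i * row + j + 1
            c, d = a + row, b + row
            tris += [(a, c, b), (b, c, d)]         # two triangles per quad
    return verts, uvs, tris
```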

Now that you’ve got a 3D Sun model placed at the correct location in the sky, you need to scale it appropriately so the disc appears the correct size. I haven’t done this part, but I imagine it’s not overly difficult given that the Sun is 93 million miles away and we know its size. Trivial for you now that you’ve mastered celestial coordinate conversion!
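
For what it’s worth, the geometry is easy to sketch: compute the apparent angular diameter from the radius and distance, then scale the sphere (or place it at a scaled distance) so it subtends that angle at the camera. The constants below are approximate values, not anything tuned for Bending Time.

```python
import math

SUN_RADIUS_KM = 695_700.0        # approximate solar radius
EARTH_SUN_KM = 149_597_870.7     # one astronomical unit

def angular_diameter_deg(radius_km: float, distance_km: float) -> float:
    """Apparent angular diameter (degrees) of a sphere at a given distance."""
    return math.degrees(2.0 * math.asin(radius_km / distance_km))

print(angular_diameter_deg(SUN_RADIUS_KM, EARTH_SUN_KM))   # ~0.53 degrees
```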
 
Rendering the Moon

​Rendering the Moon is very much like rendering the Sun, though there is a slight twist when it comes to the waxing and waning crescent shapes that we see.
Picture
Texture of the Moon
https://svs.gsfc.nasa.gov/4720

With radius 1737.4 km and the texture above, we can render the Moon the same way we rendered the Sun. And we know how far away the Moon is from Meeus’ algorithm so we can scale it accordingly, too. Further, NASA provides elevation data of the Moon in addition to the imagery above so we can have a bumpy Moon surface if we want. This will get us a nice spherical Moon but we also need to worry about when we only see a portion of the Moon’s illuminated surface. Great explanatory page on this at https://science.nasa.gov/moon/moon-phases/ btw.
 
Both Meeus’ “Astronomical Algorithms” and Duffett-Smith and Zwart’s “Practical Astronomy with your Calculator or Spreadsheet” provide formulae for calculating the phase of the Moon: specifically, the illuminated fraction of the disc, where 0% is a new Moon and 100% is a full Moon, as well as the position angle of the bright limb, since the illuminated edge appears slightly rotated depending on where you are located on Earth and the phase the Moon is in.
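
As a hedged approximation that treats the Sun as infinitely far away and skips the position angle entirely, the illuminated fraction can be estimated from the Sun-Moon elongation alone:

```python
import math

def moon_illuminated_fraction(sun_ra_deg: float, sun_dec_deg: float,
                              moon_ra_deg: float, moon_dec_deg: float) -> float:
    """Approximate illuminated fraction of the Moon's disc (0 = new, 1 = full),
    from the angular separation (elongation) between the Sun and the Moon."""
    a1, d1, a2, d2 = (math.radians(v) for v in
                      (sun_ra_deg, sun_dec_deg, moon_ra_deg, moon_dec_deg))
    cos_psi = (math.sin(d1) * math.sin(d2) +
               math.cos(d1) * math.cos(d2) * math.cos(a1 - a2))
    psi = math.acos(max(-1.0, min(1.0, cos_psi)))    # geocentric elongation
    return (1.0 - math.cos(psi)) / 2.0
```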
​
I found a somewhat recent thesis researched and written by Alexander Kuzminykh at the Hanover University of Applied Sciences and Arts dedicated to this topic so I will point you there for what appears to be a nice, modern, Physically-Based Rendering (PBR) approach to rendering the Moon including its phases.
Picture
Physically Based Real-Time Rendering of the Moon
https://serwiss.bib.hs-hannover.de/frontdoor/deliver/index/docId/2105/file/Bachelorarbeit_Alexander_Kuzminykh_20210818.pdf

Rendering the Stars

​Last but not least we need to render the stars. One of my favourite software applications for viewing space (in general, not just from Earth’s surface) is Celestia. It has a clean user interface, complete catalog of celestial objects and nice rendering of the stars, and planets!
Picture
Celestia
https://celestiaproject.space/

Like Stellarium and Celestia, a lifelike virtual world will soon want to do away with the Milky Way skybox and move to rendering stars one at a time. We have the data and we can now position the stars at their correct locations in the sky; what remains is the brightness, colour and other effects like glint and twinkle.
​
Going back to the HYG star catalog example, we can see that in addition to a star’s position, the visual magnitude, the spectral type and color index are provided.
Picture
HYG parameters including Magnitude, Absolute Magnitude, Spectrum and Color Index
https://www.astronexus.com/hyg

We use the visual magnitude to determine the size of the star when rendering. This includes some level of glint, as shown in the screenshot from the SIGGRAPH 2001 paper “A Physically-Based Night Sky Model” by Jensen, Durand, Stark, Premože, Dorsey and Shirley. The spectrum and color index are then used to determine what colour to render the star.
Picture
A Physically-Based Night Sky Model
https://graphics.stanford.edu/~henrik/papers/nightsky/nightsky.pdf
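
Here is one way that mapping might be sketched; the reference magnitude, maximum point size and exponent in the size curve are illustrative tuning knobs rather than values from the paper, and the color-index conversion uses the Ballesteros approximation to get a temperature you would then feed into a blackbody-to-RGB ramp.

```python
import math

def star_point_size(mag: float, mag_zero: float = -1.5, max_px: float = 6.0) -> float:
    """Map visual magnitude to a point-sprite size in pixels.
    Relative flux scales as 10^(-0.4 * mag); the 0.25 exponent softens the falloff."""
    rel_flux = 10.0 ** (-0.4 * (mag - mag_zero))
    return max(1.0, min(max_px, max_px * rel_flux ** 0.25))

def temperature_from_bv(bv: float) -> float:
    """Approximate effective temperature (K) from a B-V color index
    (Ballesteros' formula); e.g. the Sun's B-V of 0.65 gives roughly 5800 K."""
    return 4600.0 * (1.0 / (0.92 * bv + 1.7) + 1.0 / (0.92 * bv + 0.62))
```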

As far as twinkle goes, I have never heard of anyone rendering this at runtime. I do recall that twinkling is caused by our atmosphere. The folks at EarthSky have a great graphic depicting the reason at https://earthsky.org/space/why-dont-planets-twinkle-as-stars-do/.

If we look at other, more complete star catalogs like the European Space Agency’s Gaia star catalog (https://gea.esac.esa.int/archive/), which contains information about 2 billion stars in our galaxy, data management obviously becomes an issue.
​
For Bending Time, the client is intended to be very lightweight with all the data residing in the cloud. This architectural concept applies to star data as well. My idea here draws from the approach taken for map data… tile it! In the case of stars, the “horizontal” axis of the tiling grid would be our sky, whether it’s the celestial sphere or the galactic grid. Then for resolution, or depth into space as it were, we could use the star’s brightness. The brighter the star, the sooner it shows up to an Observer. Conversely, the dimmer the star (let’s say not visible to the human eye), the “later” it shows up, like in a virtual telescope scenario for example. The devil is in the details, but the general concept remains the same: load a chunk of star data from a portion of the sky the Observer is interested in.
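
To make the idea a little more concrete, here is one way such a tile key might be sketched; the 10-degree tiles and the two-magnitudes-per-level bucketing are illustrative placeholders, not a Bending Time spec.

```python
def star_tile_key(ra_deg: float, dec_deg: float, mag: float,
                  tile_deg: float = 10.0, mags_per_level: float = 2.0) -> tuple[int, int, int]:
    """Bucket a star into a (level, ra_tile, dec_tile) key. Brighter stars land in
    shallower levels that stream first; dimmer stars load on demand (telescope views)."""
    level = max(0, int((mag + 2.0) // mags_per_level))   # mag -2..0 -> level 0, 0..2 -> 1, ...
    ra_tile = int((ra_deg % 360.0) // tile_deg)
    dec_tile = int((dec_deg + 90.0) // tile_deg)
    return level, ra_tile, dec_tile
```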
 
Conclusion

Well, that was a lot of information in a short amount of time. Congratulations if you made it this far.
​
I hope you gained a better understanding of life beyond our atmosphere and can now appreciate all that goes into rendering the cosmos from our pale blue dot.
 
Ad astra!
 
--Sean


References
 
Coordinate Systems:
  • https://en.wikipedia.org/wiki/Epoch_(astronomy)
  • https://aa.usno.navy.mil/data/JulianDate
  • https://aa.usno.navy.mil/faq/GAST
  • https://en.wikipedia.org/wiki/Horizontal_coordinate_system
  • NASA Ref. Systems: https://science.nasa.gov/learn/basics-of-space-flight/chapter2-2/
  • https://lco.global/spacebook/sky/equatorial-coordinate-system
  • https://en.wikipedia.org/wiki/Ecliptic_coordinate_system
  • https://en.wikipedia.org/wiki/Galactic_coordinate_system
  • Star Map: https://svs.gsfc.nasa.gov/4851/
  • https://cse.ssl.berkeley.edu/chips_epo/coordinates2.html
  • https://en.wikipedia.org/wiki/Supergalactic_coordinate_system
  • https://galaxiesbook.org/
  • https://www.amazon.ca/Practical-Astronomy-your-Calculator-Spreadsheet-dp-0521146542/dp/0521146542/
  • Meeus: https://www.agopax.it/Libri_astronomia/pdf/Astronomical%20Algorithms.pdf
  • SOFA: https://www.iausofa.org/
 
Sun:
  • https://sdo.gsfc.nasa.gov/
  • https://www.suncalc.org/
  • Simple position: https://en.wikipedia.org/wiki/Position_of_the_Sun
  • Sun position: Ch 25 in Meeus’ “Astronomical Algorithms” above
 
Moon:
  • https://www.mooncalc.org
  • https://science.nasa.gov/moon/moon-phases/
  • Moon position: Ch 47 in Meeus’ “Astronomical Algorithms” above
 
Stars:
  • https://www.asc-csa.gc.ca/eng/blog/2018/06/29/13-amazing-stargazing-locations-in-canada.asp
  • https://www.projectrho.com/public_html/starmaps/catalogues.php
  • Gliese catalog: https://vmcoolwiki.ipac.caltech.edu/index.php/Gliese_Catalog_Explorations
  • Hipparcos and Tycho catalogs: https://www.cosmos.esa.int/web/hipparcos/catalogues
  • HYG catalog: https://www.astronexus.com/hyg
  • Gaia catalog: https://gea.esac.esa.int/archive/
  • https://earthsky.org/space/why-dont-planets-twinkle-as-stars-do/
 
Constellations:
  • https://www.astronomy.com/observing/learn-the-constellations/
  • https://interestingengineering.com/science/what-are-constellations
  • https://spaceplace.nasa.gov/constellations/en/
  • https://www.iau.org/public/themes/constellations/
 
Rendering:

  • Sphere tessellation: https://en.wikipedia.org/wiki/Spherical_geometry
  • Ellipsoid tessellation: https://virtualglobebook.com/
  • Sun texture: https://svs.gsfc.nasa.gov/30362/
  • Moon textures and surface maps: https://svs.gsfc.nasa.gov/4720
  • Night Sky: https://graphics.stanford.edu/~henrik/papers/nightsky/nightsky.pdf
  • Moon: https://serwiss.bib.hs-hannover.de/frontdoor/deliver/index/docId/2105/file/Bachelorarbeit_Alexander_Kuzminykh_20210818.pdf
  • Stars: https://en.wikipedia.org/wiki/Color_index​
0 Comments

Atmosphere/Sky Rendering

8/21/2024

0 Comments

 
Introduction

Ever since the first human was hurled into space, the view of our planet from above has captivated the imagination of millions. That thin layer of atmosphere, the clouds like blankets, the sun beaming off the surface: I can’t wait to experience this for myself one day. Until then, I shall re-create it in digital form!
Picture
Earth from Space
https://www.nasa.gov/image-article/space-station-view-of-sun-over-earth-from-space

Atmosphere

When I refer to the atmosphere, specifically the rendering of the atmosphere, I am referring to an above-planet view like the one shown above. The focus of this section is that thin, fuzzy layer of air mainly composed of nitrogen. It’s weird to say it like that but it’s true.
​
One of the first real eye-opening solutions for rendering the Earth’s atmosphere (at least for me) was Sean O’Neil’s, published by NVIDIA in “GPU Gems 2” in 2005.
Picture
Chapter 16. Accurate Atmospheric Scattering – GPU Gems 2
https://developer.nvidia.com/gpugems/gpugems2/part-ii-shading-lighting-and-shadows/chapter-16-accurate-atmospheric-scattering

In his article, O’Neil expands on Nishita et al.’s work from their seminal 1993 paper “Display of the Earth Taking into Account Atmospheric Scattering,” still available at http://nishitalab.org/user/nis/cdrom/sig93_nis.pdf. As I quickly read through O’Neil’s article years ago, I was astonished people had gone to such great lengths since the 1980s to wrestle with the equations for Rayleigh and Mie scattering to simulate light passing through our atmosphere. Besides interpreting Nishita et al.’s work, in my opinion one of O’Neil’s big accomplishments was making a real-time solution. Great work, Sean! (And great name. 😉)
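For readers who haven’t waded into these papers, the two scattering models at the heart of all of this boil down to a pair of phase functions describing how much light is deflected by a given angle. The sketch below shows the standard Rayleigh phase function and the Henyey-Greenstein approximation commonly substituted for Mie scattering; it is just the math, not O’Neil’s (or anyone else’s) full implementation.

    import math

    def rayleigh_phase(cos_theta):
        # Rayleigh scattering phase function: (3 / (16*pi)) * (1 + cos^2(theta)).
        # Models scattering by air molecules much smaller than the wavelength.
        return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)

    def mie_phase_henyey_greenstein(cos_theta, g=0.76):
        # Henyey-Greenstein phase function, a common stand-in for Mie (aerosol)
        # scattering; g controls how strongly light is scattered forward.
        denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
        return (1.0 - g * g) / (4.0 * math.pi * denom)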
​
Since O’Neil’s work, many others have pursued realistic planetary atmosphere rendering, notably Lukas Hosek and Alexander Wilkie, Eric Bruneton, and later Sébastien Hillaire.
Picture
Hosek-Wilkie Sky Model
https://cgg.mff.cuni.cz/projects/SkylightModelling/HosekWilkie_SkylightModel_SIGGRAPH2012_Preprint.pdf

Picture
Precomputed Atmospheric Scattering – Eric Bruneton
https://ebruneton.github.io/precomputed_atmospheric_scattering/

Picture
A Scalable and Production Ready Sky and Atmosphere Rendering Technique – Sébastien Hillaire – Epic Games, Inc.
https://sebh.github.io/publications/egsr2020.pdf

Bending Time implemented the O’Neil solution (a co-op student did the work at the time; thanks, Ramit!) using the same texture generation/lookup in Unity. The results were good, though it does use spheres for the outer and inner limits of the atmosphere, which I would prefer to be ellipsoids; so far, though, the difference hasn’t been that noticeable.
 
Sky

​When I refer to sky, I refer to the case when the Observer is on the surface and looking up. In many Cartesian-coordinate-system-based games, a skybox is used. Skyboxes are commonly provided as cubemaps.
Picture
Skybox as a Cubemap
https://learnopengl.com/Advanced-OpenGL/Cubemaps

These work well because they occupy the background of a scene where the user is not typically focused. And they’re cheap to render. However, for a lifelike virtual world, we need a realistic and accurate sky.
​
O’Neil’s solution is as far as we got on Bending Time. The sky as viewed from the surface wasn’t properly verified (and likely does not work well in all cases). One thing I did notice was that the transition from space to ground (i.e., atmosphere to sky) was non-trivial and contained artifacts. It would appear one continuous model for both atmosphere and sky is the best option for a lifelike virtual world, but this requires the proper funding and business case to pursue further.
 
Clouds

Bending Time has not worked on rendering clouds, but I do have some background here, as my team on the Aurora program back in 2006 modeled 3D volumetric clouds for the weather radar simulation. Admittedly, though, rendering fluffy white 3D clouds with soft edges and the like is a whole ‘nother level.

To my surprise, when I stumbled upon the website of the Computer Graphics Lab at the University of Tokyo while writing this article, I found that Professor Nishita and his students have done quite a bit more work on rendering natural scenes beyond the atmosphere. Check out http://nishitalab.org/index-e.html. You’ll see further work on rendering natural phenomena, including clouds.

More recently, I came across Felix Westin’s work on his UHawk flight simulator game, a side project of his that looks amazing. He has clearly spent quite a bit of time working on rendering 3D clouds, so much so that he created a Unity asset for volumetric sky, lighting and clouds called Overcloud.
Picture
Overcloud by Felix Westin
https://overcloud.me/

And my most recent Googling of rendering 3D volumetric clouds turned up a resource posted on GitHub by Piyush Verma, which can be found at https://gist.github.com/pixelsnafu/e3904c49cbd8ff52cb53d95ceda3980e. Here he lists resources that he collected on the subject.

And this is where I will leave you on the subject. I’m sure you were hoping to see all sorts of nitty-gritty details about rendering 3D volumetric clouds but that’s part of the problem with communicating the challenges of a true-scale, planet-wide virtual world. You see all these nice papers and videos but very few of the techniques/solutions work at planet scales. Like I mentioned in my Forest/Ground Rendering blog post, the vast majority of game developers work in a scene editor with all the data they need for that scene on disk.
​
I’m not afraid of the challenges ahead but I have to be smart about it. Rendering 3D volumetric clouds is expensive and must be tied to business in order to be funded. Until then, I’ll keep my “caput in nubibus.”
 
--Sean
0 Comments

Forest/Ground Rendering

8/17/2024

0 Comments

 
Introduction

There are literally trillions of trees on our planet. A typical mountainside in a densely forested area like British Columbia could be host to over a million trees. Rendering the corresponding scene in a virtual 3D environment obviously has its challenges.
​
In this blog post, I take a closer look at rendering all these trees and related ground cover like grass as part of the ongoing lifelike virtual world series.
Picture
Global Tree Map
​https://www.washingtonpost.com/news/energy-environment/wp/2015/09/16/the-countries-of-the-world-ranked-by-their-tree-wealth/
Forest Data

When determining where to place a tree geographically in a lifelike virtual world, one typically looks to a forest density or tree cover map of sorts. With the attention the environment receives these days, you can imagine there is no shortage of data about our planet’s forests. In British Columbia, especially given the forest fires of recent years, there are numerous resources for forest data, like the species map shown below.
Picture
BC Tree Species Map
https://www2.gov.bc.ca/gov/content/industry/forestry/managing-our-forest-resources/forest-inventory/data-management-and-access

These forest maps are good for relatively broad areas, but when you get down to the surface and look to place a tree at a particular location, you need more information about the terrain, like the slope, and whether there are other features that would preclude placing a tree at that location, like a river. To achieve this, one can generate forest maps offline by combining layers like forest density maps, canopy height maps, species maps, road data and river data, where, in the last two cases, the roads and rivers act as “masks” indicating areas where no trees should be placed. This type of activity is already being done by scientists and forest managers around the world to better understand our global forest inventory and aid in making timely and informed decisions about climate, conservation and logging.
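As a rough illustration of that layer-combination step, here is a minimal sketch that turns a forest-density grid plus road and river masks into candidate tree positions. The grid layout, cell size and per-cell tree count are assumptions for the example, not Bending Time’s actual data format.

    import random

    def place_trees(density, road_mask, river_mask, cell_size_m=30.0,
                    max_per_cell=20, seed=42):
        # density:  2D list of floats in [0, 1], fraction of each cell under canopy
        # *_mask:   2D lists of bools, True where no tree may be placed
        # Returns (x, y) positions in metres within the grid.
        rng = random.Random(seed)
        trees = []
        for row in range(len(density)):
            for col in range(len(density[row])):
                if road_mask[row][col] or river_mask[row][col]:
                    continue  # masked out by a road or river layer
                for _ in range(int(density[row][col] * max_per_cell)):
                    x = (col + rng.random()) * cell_size_m
                    y = (row + rng.random()) * cell_size_m
                    trees.append((x, y))
        return trees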

These maps are often derived from freely available satellite imagery. However, mapping forests from an aircraft using a Light Detection and Ranging (LiDAR) scanning system is starting to become mainstream. In this scenario, the region’s governing body typically will hire a third-party contractor to fly the area of interest, process the data and provide the government customer with a dataset that can be browsed on a PC. These datasets are often processed down to sub-meter resolution, as high as 1cm between points. With this high-resolution data, individual trees start to show up.
Picture
LiDAR dataset shows tree cover
https://www.nsnews.com/local-news/north-van-firm-looks-to-measure-tree-canopies-from-space-5411760
Taking this a step further, one can imagine it won’t be long before we have a tree inventory database containing the location of, and information about, every tree in a region. This is already happening in high-focus areas like BC, where the province is developing an Individual Tree Database as part of its LiDAR BC program.
Picture
Individual Tree Database – LiDAR BC
https://lidar.gov.bc.ca/pages/program

However, many other areas around the world are not at this level of tracking. So, we still need to apply maps and heuristics to place trees such that the virtual mountain looks correct and appealing.

Having said that, a potentially useful concept for managing tree data is to design a format that contains the individual tree locations as well as each tree’s characteristics. This is what the ESRI shapefile was very good at back in the day: defining locations and accompanying attributes. Today, the 3D Tiles specification might be more appropriate for storing individual tree locations, especially because it adds a pointer to a 3D model to render the tree. More on this later. The point here is that once the format for capturing individual trees is defined and implemented, third-party forest management organizations like the Province of BC could upload their own individual tree databases, replacing the generalized positions with actual sensed/analyzed positions; but that’s a future endeavour.
 
Forest Rendering

Now that we know where to place the trees on the terrain, we need to determine how to render them. If you look at what current game engines do today, the approach is to place a 3D tree model at its desired location in a scene editor. This tree model will then have multiple levels of detail (LOD) that will be managed by the game engine at runtime. In short, when you are far away from the tree, a low level of detail is used and as the observer gets closer, the game engine introduces progressively higher levels of detail until the observer is close enough to see the details in the bark. And with all the optimizations these game engines have implemented over the years, a game developer can get pretty good performance rendering thousands of trees in a scene. But an avatar flying around a true-scale planet sees a lot more than thousands of trees. And while Unity and Unreal could certainly handle more, the general architecture is not designed to handle millions of trees at once.
Picture
Levels of detail of a 3D tree model
One of the issues is placement. With game engines, tree placement is done ahead of time in an editor. When rendering the planet’s trillions of trees, we obviously need to get a little craftier. This is why we use forest maps and other data to generate tree positions at runtime on the GPU. But once placed, we still need to render a 3D tree, or at least something that resembles a tree.

A common approach in many games to rendering large numbers of trees at once is to use “billboards,” a simple 2D trick used to mimic a 3D tree. The lowest level of detail in the example above is usually a billboard.
Picture
Tree billboard example
A similar approach to billboards is the use of imposters. While a billboard is a single 2D image of an object that is rotated to always face the camera, imposters provide multiple 2D images to capture the look of the object from different vantages. The imposter displays the 2D image that best matches the current camera-object view angle.
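Selecting that image is mostly an angle calculation. Here is a minimal sketch, assuming the imposter atlas holds a fixed number of views captured at evenly spaced yaw angles around the tree (a common layout, but by no means the only one).

    import math

    def imposter_index(camera_pos, object_pos, num_views=8):
        # Horizontal angle from the object to the camera, mapped to one of
        # num_views pre-rendered images spaced evenly around the object.
        dx = camera_pos[0] - object_pos[0]
        dz = camera_pos[2] - object_pos[2]
        yaw = math.atan2(dz, dx) % (2.0 * math.pi)
        sector = 2.0 * math.pi / num_views
        return int((yaw + sector * 0.5) // sector) % num_views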

Billboards and imposters can go a long way toward rendering a massive number of trees, but the combined runtime overhead of placement and rendering may still cause visual glitches for the user. You may have seen this in large-scale virtual worlds where the 3D trees “pop” in and out of the scene, similar to the terrain popping I described in my Terrain Rendering blog post. We can use fading and other techniques to help minimize the popping, but this still does not alleviate the issue 100%.

Regardless of whether you’re rendering a 3D model of a tree or a billboard, GPU instancing is now commonplace on most video cards. This involves uploading a renderable object to the GPU and then instructing the GPU to render many, many instances of this object at specified locations. The optimizations in the graphics libraries and the hardware itself are impressive indeed, resulting in tens if not hundreds of thousands of objects rendered per frame. I raise this point to make you aware that this is the core technique for rendering thousands of trees but, even then, between placement AND rendering, it is still not sufficient for our planet-scale world.

During my many drives around BC (highly recommend btw), I would often look out the car window and see the beautiful landscapes. Snow-topped mountains covered in vast numbers of trees, with exposed cliff faces, waterfalls and shadows adding depth and beauty to the scene. When I looked at all those trees, I couldn’t really see the individual trees at that distance; it looked more like a shag rug covering the hillside. I thought “Ah ha! Why not render a shag rug at very long distances?” This concept would pop back into my head every now and again until one day, when I watched a video by Inigo Quilez, where he planted a forested landscape using a similar approach.
Picture
Painting a landscape with maths – Inigo Quilez
https://iquilezles.org/live

The technique Inigo presents (in a simple and easy-to-understand way, I might add – he’s great) produced excellent results, as can be seen above. However, it may not quite be the shag rug approach that I was imagining. Regardless, as usual, he offers inspiring methods that serve as food for thought to expand the shag rug concept, something I will do in the not-so-distant future.
​
Other aspects of rendering the forest are the tree species, environment and seasons. Many video games, especially higher-end AAA games, incorporate the idea of biomes: “A large naturally occurring community of flora and fauna occupying a major habitat.”
Picture
Biomes in Ghost Recon Wildlands
https://666uille.wordpress.com/wp-content/uploads/2017/03/gdc2017_ghostreconwildlands_terrainandtechnologytools-onlinevideos1.pdf

Each biome in a video game would typically contain its own set of 3D models, materials, textures, shaders, etc. representing the “look and feel” of that particular biome. The results are amazing. From Ghost Recon Wildlands to Red Dead Redemption 2, some of the scenes are jaw-dropping. I can’t wait to get Bending Time’s lifelike virtual world to that point, though the approach might be different still.

With all the map data around the globe, we may not need to constrain ourselves to a fixed set of biomes. Instead, we might be able to procedurally generate and place 3D models according to the map data. For example, we know the average temperature and humidity levels for every area of the globe, so we can use this data (along with other data as needed) to manipulate or select the models, materials, etc. for that area.
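To make that idea concrete, here is a minimal sketch of a Whittaker-style classifier that maps climate data to a biome label, which would in turn index into a set of models and materials. The thresholds and labels are illustrative assumptions of mine, not anything Bending Time has implemented.

    def pick_biome(mean_temp_c, annual_precip_mm):
        # Very coarse temperature/precipitation classification; thresholds are
        # illustrative only. The returned label would select the 3D models,
        # materials and shaders used for that area.
        if mean_temp_c < -5:
            return "tundra"
        if mean_temp_c < 5:
            return "boreal_forest" if annual_precip_mm > 400 else "cold_steppe"
        if mean_temp_c < 20:
            if annual_precip_mm > 1000:
                return "temperate_rainforest"
            return "temperate_forest" if annual_precip_mm > 500 else "grassland"
        if annual_precip_mm > 2000:
            return "tropical_rainforest"
        return "savanna" if annual_precip_mm > 500 else "desert"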
​
The procedural algorithms to do this work are non-trivial, especially to create visually appealing and coherent scenes like those seen in AAA games. This is exemplified by the video game developers still needing artists to correct and polish the final scene. My idea for Bending Time is to use the lifelike virtual world user community effectively as the artists in this case. An ideal scenario is Bending Time hosts the base data and users can correct it, like a 3D globe Wikipedia of sorts. This is a bit of a moonshot idea but hopefully we’ll get there one day.
 
Grass Rendering

A closely related topic to forests is grass rendering. When it comes to grass, you can easily appreciate that there are orders of magnitude more blades of grass on the planet than there are trees. The good news is that grass is only relevant in a 3D scene up to a certain distance; it contributes very little to a scene when it is, let’s say, 10 km away from the observer.

Brano Kemen wrote a very nice article years ago about the grass rendering in Outerra. He used three distance-from-camera levels of fidelity. It’s amazing he wrote this article 12 years ago! In fact, his whole Outerra project was way ahead of its time.
Picture
Procedural grass rendering in Outerra
https://outerra.blogspot.com/2012/05/procedural-grass-rendering.html

One of the important reasons to render grass in a game is to cover up flat and often repetitive ground textures. Having waving grass effectively brings the ground to life increasing the realism and overall immersion of the user into the scene. There are numerous other resources available on the topic, but Kemen’s article lays out the basics, so I’ll leave the actual rendering part at that for now.
​
Regarding the placement of grass (the geographic areas, not the individual blades), it is similar to forests in that we use map data to determine what areas of the ground are covered in grass. A common source of data for determining what’s on the ground is land use or land cover [classification] data. Most nations maintain data on their citizens’ use of the land to support a variety of planning activities. For example, Canada maintains land cover data to track the percentage of land being used for agricultural purposes. Below is a snippet of this data for the Metro Vancouver area.
Picture
Land cover data for the Metro Vancouver area
https://open.canada.ca/data/en/dataset/16d2f828-96bb-468d-9b7d-1307c81e17b8

​This data is processed from open satellite imagery, where the source data often has a resolution of 30 meters. It seems like this could be useful for our grass placement but, when you zoom in, there are so many variations to what is actually on the ground that this resolution of land cover data is close to useless when it comes to placing and rendering grass. For example, if we look at the highlighted red square on the right-hand side of the above image, we can download a much higher resolution optical aerial image in that region to explain this point.
Picture
Optical Aerial Image from the Township of Langley’s Open Data Portal
https://data-tol.opendata.arcgis.com/

An aerial image like this gives us a much better sense of what is on the ground. Looking at what appears to be a walking path in the upper portion of the image, imagine virtually walking down that path and seeing grass rendered in the field to the left and right of us. This makes sense; the border between the path and the grass seems clear. But now imagine walking down the street in the little suburban area. It becomes less clear exactly where the road ends and the grass on the front yards starts. This level of fidelity in mapping seems crazy but, as a society, civilization even, we are getting there.

Having said that, with grass, the placement can be generalized even further because not too many people remember a particular patch of grass in their fondest memories, like they might in the case of a memorable tree. For forested regions, the ground cover is particularly not memorable, which is why generalized biome data, like that from AAA games, might be a good way to start for a lifelike virtual world.
 
Ground Textures

Obviously, there are many other things on the ground that make up a virtual scene. In pretty much every video game that has terrain, the textures used on the terrain mesh form the foundation on which everything else is rendered. From this perspective, this blog post is backwards, working its way down from the forest canopy to the ground. Going backwards tracks with my personality, but I digress.

When texturing the ground, game developers and artists typically use a “tileable” texture. A tileable ground texture would typically span about 2-3 meters of ground cover and then repeat itself when you get to the border of the image.
Picture
Tileable ground texture
https://www.independent-software.com/tileable-repeatable-hires-terrain-textures-for-download.html

​One of the downsides to tiling ground textures is the tile grid starts to make itself apparent over larger distances as shown below.
Picture
Tiling artifacts
https://discussions.unity.com/t/prevent-annoying-tiled-textures/574917

A common approach to reduce tiling artifacts is texture splatting. This is the process of selecting a small set of materials to represent the ground (say four) and using a control image to indicate which materials, and to what extent, should appear at each pixel. The channels of the control image act as per-material blend weights when the ground textures are combined. The results can be quite effective.
Picture
Texture splatting
https://habr.com/en/articles/442924/
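Per pixel, the blend itself is simple. Here is a minimal sketch of the weighting step, assuming a four-channel (RGBA) control map where each channel is the weight of one ground texture.

    def splat_blend(control_rgba, texels):
        # control_rgba: (r, g, b, a) weights in [0, 1] sampled from the control image
        # texels:       four (r, g, b) samples, one per ground texture, at the same UV
        total = sum(control_rgba) or 1.0
        weights = [w / total for w in control_rgba]  # normalize so weights sum to 1
        return tuple(
            sum(w * texel[channel] for w, texel in zip(weights, texels))
            for channel in range(3)
        )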

​Another ground texture technique used to enhance our terrain is bump mapping. Bump mapping is a 3D technique for rendering realistic textures on surfaces that otherwise would look flat.
Picture
Terrain bump mapping
https://shadedrelief.com/3D_Terrain_Maps/3dterrainmapsbum.html

​This is all fine and dandy but how do we *ahem* mesh together aerial imagery from the mapping industry with ground textures from the video game industry? If we are using imagery that is less than a few meters in resolution (the Township of Langley image shown previously has a resolution of 7.5 cm) then I think it’s clear that the aerial imagery should form the basis of our ground texture. The key will be in blending in colour-appropriate ground textures and applying bump maps to increase the visual fidelity of the terrain.
​
This is where I left off on the subject when I was working on Bending Time back in 2021. I will pick this up as part of the enhanced terrain work I’ve mentioned before. I look forward to showing you some results!

Videte silvam ad arbores.
 
--Sean
0 Comments

PBR Tester

7/3/2024

0 Comments

 
As promised, following up from my Ocean Rendering blog post, I am providing my Physically-Based Rendering (PBR) test project in this post. A zip file containing the simple Unity project can be found here.
 
As discussed in my previous posts, a game engine’s rendering pipeline tends not to help us out when rendering the whole planet for a variety of reasons, one of which is computing the ocean wave geometry in ECEF or ENU coordinates. This is why I’m implementing my own PBR solution: so I can embed custom coordinate calculations in the pipeline.
 
There are many PBR references on the web. I primarily used the following resources:
  • https://learnopengl.com/PBR/Theory
  • https://www.jordanstevenstechart.com/physically-based-rendering
  • http://graphicrants.blogspot.com/2013/08/specular-brdf-reference.html
  • https://www.pbr-book.org/
 
I used Unity 2021.3.11f1 LTS but the project is so simple it should be trivial to go to an older version of Unity. When you first open the project, open the TestPbr scene and you'll see a sphere off to the side of the main camera and the light as shown below.
Picture
Unity PBR Test Project
​There is a separate Test Viewer game object that you can move around independently from the light to get various vantages.
 
When you run the project, in the Game view, you can see the sphere being lit up by a shader. Select the TestPBR game object in the Hierarchy panel and you will see the little UI I put together in the Inspector to test Bending Time’s basic PBR rendering.
Picture
PBR Test with Inspector UI
​The first section is the Shading model. I provide the following "complete" models:
Picture
Complete Shading Models
I say “complete” models because this section controls all the presets for the remaining sections, which I describe below.
 
The next section allows you to select the primitive being shown. The sphere is all I’ve really used.
 
Next we have the light colour and intensity. So far, pretty straightforward.
 
The next three sections provide options to play with the light contributions from the three main sources: Diffuse, Ambient and Specular.
 
For Diffuse, I implemented basic Lambertian shading as well as the Oren-Nayar model.

For Ambient, I simply provided a “constant” model where you just set the intensity (or % contribution) of the ambient light.
​
The interesting part is in the Specular section. I modeled the specular using the typical contributions from the:

  • Normal Distribution Function (NDF),
  • Geometric Shadowing Function (GSF), and
  • Fresnel effect.
Picture
Specular Contribution Options
I chose ranges and presets based on the resources linked above and what seemed to make sense. I expected to be able to create a decent-looking PBR sphere this way, but the results fall short of the PBR screenshots I’ve seen.
Picture
Cook-Torrance Model
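For reference, one common concrete combination of those three specular terms is the GGX normal distribution, the Smith geometric shadowing function built from Schlick-GGX terms, and the Schlick Fresnel approximation, assembled into the Cook-Torrance specular term. The sketch below follows the formulation popularized by learnopengl.com; it is not necessarily the exact set of presets shipped in my test project.

    import math

    def ggx_ndf(n_dot_h, roughness):
        # GGX / Trowbridge-Reitz normal distribution function (alpha = roughness^2).
        a2 = roughness ** 4
        d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
        return a2 / (math.pi * d * d)

    def smith_gsf(n_dot_v, n_dot_l, roughness):
        # Smith geometric shadowing from two Schlick-GGX terms (direct-lighting k).
        k = (roughness + 1.0) ** 2 / 8.0
        def g1(n_dot_x):
            return n_dot_x / (n_dot_x * (1.0 - k) + k)
        return g1(n_dot_v) * g1(n_dot_l)

    def fresnel_schlick(v_dot_h, f0=0.04):
        # Schlick approximation of the Fresnel effect; f0 ~ 0.04 for dielectrics.
        return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

    def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness):
        d = ggx_ndf(n_dot_h, roughness)
        g = smith_gsf(n_dot_v, n_dot_l, roughness)
        f = fresnel_schlick(v_dot_h)
        return (d * g * f) / max(4.0 * n_dot_v * n_dot_l, 1e-4)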
​I would love for someone to look at this project and see what could be improved/fixed. I’m also providing it with the hope that other people can learn about PBR shading by using and experimenting with it.
 
Lux terrae.
 
--Sean
0 Comments

Ocean Rendering

6/30/2024

0 Comments

 
Introduction

Rendering the ocean in video games has occupied the mind-space of programmers, researchers and dreamers alike for decades. Making realistic waves on water surfaces in real-time is now commonplace in today’s AAA games. Even lower-end games played on mobile devices demonstrate realistic looking water surfaces. So, the problem of rendering waves is more or less solved… on a local scale. But what isn’t commonplace is seeing an animated ocean surface on a planetary scale. In this blog post I talk about where I am at on this topic with Bending Time as well as outline the ocean features that will be investigated in the future.
 
Ocean Height

As soon as we step back to rendering the whole planet, the ocean rendering problem space changes. It’s no longer just a matter of generating the wave geometry, shading it and adding effects like reflections. One of the main reasons is the large-scale coordinates you need to model things planet-wide, which I discussed in my Earth Coordinate Systems blog post. Further, spherical-based coordinates make it more difficult to frame the problem of generating the wave geometry. If one models the planet in the ECEF frame, one quickly sees the math for simulating the ocean surface is non-trivial. Computing the surface accurately, with high fidelity, involves the forward conversion of geodetic coordinates to ECEF for every vertex of the ocean surface mesh. On top of this, implementing even the simplest Gerstner waves, as described in the classic “Effective Water Simulation from Physical Models” written by Mark Finch and published in NVIDIA’s original GPU Gems book, can be challenging, primarily because Mark, like most if not all water simulation developers, treats the ocean as having Z-heights above an XY plane in a 3D Cartesian coordinate system.
Picture
Gerstner waves in Mark Finch’s “Effective Water Simulation from Physical Models” in GPU Gems
The immediate thought of most game developers will be to “just hack it”: do the wave simulation in a separate 3D Cartesian coordinate frame and place/orient it on the Earth’s surface so it looks appropriate. This perennial favourite among game developer strategies can solve the problem, but it’s not a seamless solution. For example, it would be difficult to do this for the whole planet all at once. One might argue that you don’t need individual waves at the whole-planet scale, which is true, but there are other considerations. To start, the question of whether to render the ocean as a “mega mesh” or to use tiles, as is done in the mapping world, comes to mind.
​
For non-geospatially-oriented developers, using a mega mesh with “sea level at height zero” might be their first tack. This works okay for viewing the entire planet at once but, when you are close to the surface, you see that height zero, which typically is an ellipsoid height of zero, doesn’t always align with the land. In fact, it rarely does. This is why 3D globes don’t try to render the land-ocean interface: the coordinates don’t line up. In addition to the land-ocean interface issue, the ocean surface isn’t a perfect ellipsoid either.
​
The ocean’s surface is affected by several factors, but the biggest is Earth’s variable gravitational pull across the planet. These variations cause the ocean surface to sit higher where the gravitational pull is stronger and lower where it is weaker. Scientists refer to the shape of the ocean surface under the influence of the gravity of Earth, not accounting for other factors such as wind and tides, as the geoid.
Picture
Earth Gravitational Model from Wikipedia
​There are many models of the Earth’s geoid used for a variety of purposes. When defining the global height of the ocean surface, the Earth Gravitational Model of 1996 (EGM96) still serves as a lightweight model (lightweight in terms of computer memory space) used in modern devices like handheld GPS receivers. (The height displayed by a handheld GPS device is typically above mean sea level (AMSL), which is a height relative to the geoid, but this is tangent to the discussion here.) Beyond EGM96, there is EGM2008 and soon to be released is EGM2020, which provide enhanced accuracy and precision to the model.
In the cases of EGM96 and EGM2008, the NGA has processed the models into tiles. EGM96 is available in 15’ x 15’ (1/4 degree) tiles and EGM2008 is available in 2.5’ x 2.5’ tiles from https://earth-info.nga.mil/index.php?dir=wgs84&action=wgs84.
Picture
EGM2008 raster tiles from NGA
​As can be seen, the format of the tiled EGM model looks like a heightmap. The heights are encoded as floating point values above or below the reference ellipsoid, WGS84 in this case, making it a type of Digital Elevation Model (DEM). From here we can deduce the height of the ocean surface is the ellipsoid height of zero plus or minus the geoid height as depicted in the following figure.
Picture
Relationship between orthometric height, ellipsoid height, and geoid height
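Putting that relationship into code, here is a minimal sketch of looking up the (tide-free) ocean surface’s ellipsoid height by bilinearly sampling a geoid undulation grid. The grid layout and parameters are assumptions standing in for an EGM96/EGM2008 raster tile.

    def ocean_surface_ellipsoid_height(lat_deg, lon_deg, geoid_grid, lat0, lon0, cell_deg):
        # geoid_grid: 2D list of geoid undulations in metres (rows from south to north);
        # lat0/lon0 is the grid's lower-left corner and cell_deg its spacing.
        # With no tides or waves, the ocean surface's ellipsoid height equals the
        # undulation N, interpolated between the four surrounding grid posts.
        col = (lon_deg - lon0) / cell_deg
        row = (lat_deg - lat0) / cell_deg
        c0, r0 = int(col), int(row)
        fc, fr = col - c0, row - r0
        bottom = geoid_grid[r0][c0] * (1 - fc) + geoid_grid[r0][c0 + 1] * fc
        top = geoid_grid[r0 + 1][c0] * (1 - fc) + geoid_grid[r0 + 1][c0 + 1] * fc
        return bottom * (1 - fr) + top * fr  # metres above/below the WGS84 ellipsoid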
This global model of ocean surface height is still an approximation. If we were standing on a virtual beach looking at the waves come in, this representation of the height of the surface would still not be accurate enough to make for a nice-looking scene. A few meters up or down could result in the virtual area being flooded or our virtual avatar looking at a fairly withdrawn beachfront. In order to achieve beach-level quality for the ocean surface height, we look to nautical charts.

Nautical (or marine) charts define the height of the ocean’s surface much more precisely, as you can imagine is needed for the safe navigation of vessels traveling the ocean. Nautical charts list the various heights for the area the chart covers relative to a chart datum. Chart (or tidal) datums are reference points for geographical regions that define the height of the ocean’s surface at a particular date and time.
​
Looking at an example chart, 3538 of the Canadian Hydrographic Service (CHS) BSB Navigational Charts, which covers the Desolation Sound area, it looks like a typical nautical chart: the land is depicted in a tan colour, the deeper areas are coloured white, and the shallower the water gets, the deeper the shade of blue.
Picture
CHS RNC Chart 3538 – Desolation Sound
​In addition to the pictorial view of land and water depth, nautical charts always list their datum information.
Picture
CHS RNC Chart 3538 – Chart Datum Information
​As we can see from the chart datum information, height references like HHW ([Mean] Higher High Water) and LLW ([Mean] Lower Low Water) to name a couple are shown. These height references are important to determine exactly where the ocean surface is located vertically.
Picture
NOAA’s representation of tidal datums
https://tidesandcurrents.noaa.gov/datum_options.html

A chart datum height reference of Mean Lower Low Water is often used to provide a conservative view for mariners so they can rely on the water level not going below this reference point.

For the purpose of rendering the ocean surface without simulating the tides, we can use Mean High Water. This will provide a view of the land-ocean interface where the underwater features, e.g., seaweed, crabs, etc., are generally not exposed. I consider simulating low tide, with all the exposed features that were underwater hours before, a future enhancement for the lifelike virtual world.
​
Let’s use an example to calculate the ECEF coordinates of the ocean surface in meters relative to the WGS84 ellipsoid. Using the same chart as above, let’s zoom into a particular region, Hernando Island.
Picture
Stag Bay RNC Chart Example
The numbers shown in the image above are “soundings,” measurements hydrographers take to determine the depth at a particular location. As we saw in the chart datum information, the soundings are in meters.

Let’s use the sounding to the immediate right of the STAG BAY label, ‘40’, and compute the coordinates of the ocean surface at that point.
​
The 40-meter sounding is the depth below chart datum (Lowest Normal Tide). To convert to Mean High Water as I suggested, we look back to the chart. We can see the closest tidal station to Stag Bay is Lund. The tidal information on the chart only has Mean Higher High Water, so we’ll use that value from the corresponding column in the Lund row, which is 4.8 meters. Now we have a Mean Higher High Water depth of 40 + 4.8 = 44.8 meters. This is referenced to the Lund tidal station. But how do we know where this station sits relative to the WGS84 ellipsoid? For that, we need to get information about the tide station from an external resource. These are typically managed by the government and provided via a website. Below we see the information about the Lund tide station.
Picture
Lund Tidal Station Data
https://www.tides.gc.ca/en/stations/07885/benchmark/63a397a8424d9e40dd9a7f5e

We see there are various vertical datum codes listed. These different datum references vary in purpose and accuracy. For our purpose, we will use the NAD83_CSRS datum, which shows an elevation of -8.016 meters. To simplify the process in this blog post, we will use this value directly as the height relative to WGS84; however, this isn’t quite correct as the Earth’s tectonic plates are constantly moving. For the super keen, have a look at https://natural-resources.canada.ca/maps-tools-and-publications/geodetic-reference-systems/canadian-spatial-reference-system-csrs/9052 to get a better understanding of the details of the NAD83_CSRS datum. Adding the elevation of the Lund tide station, we have a height of 44.8 + (-8.016) = 36.78 meters. Now that we have the elevation, we need the horizontal geographic coordinates.

Again, looking at the chart information, the chart uses the Mercator Projection based on NAD83. We must project the Mercator coordinate back to geographic.

Note if the chart was referenced to a different ellipsoid other than WGS84, we would need to perform a datum transformation (or datum shift) as well.

Using Global Mapper, a GIS software application that can read RNC charts, the location of the 40-meter sounding has Mercator coordinates of X= -13905279.486 and Y = 6413303.196. I talked about map projections in my Earth Coordinate Systems blog post but I will just skip straight to the answer here. The geographic coordinates for that point are 49° 59' 55.3849" N, 124° 54' 47.7033" W.

So now we have a geodetic point for that location:

   Lat: 49.99871802
   Lon: -124.91325092
   Elevation: 36.78 meters
 
Converting from geodetic to ECEF using the formula from my previous blog post, we get:
​
   X: -2351.153 km
   Y: -3368.638 km
   Z: 4862.726 km
 
We can now position that point in our WGS84 ECEF world coordinate system. Next we need to scale the solution.
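For completeness, here is a minimal sketch of that geodetic-to-ECEF conversion using the standard closed-form WGS84 equations (the same formula referenced from my earlier blog post).

    import math

    WGS84_A = 6378137.0            # semi-major axis (m)
    WGS84_E2 = 6.69437999014e-3    # first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, h_m):
        # Convert geodetic latitude, longitude and ellipsoid height to ECEF metres.
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)  # prime vertical radius
        x = (n + h_m) * math.cos(lat) * math.cos(lon)
        y = (n + h_m) * math.cos(lat) * math.sin(lon)
        z = (n * (1.0 - WGS84_E2) + h_m) * math.sin(lat)
        return x, y, z

    # geodetic_to_ecef(49.99871802, -124.91325092, 36.78)
    # -> roughly (-2351e3, -3369e3, 4863e3) metres, in line with the values above.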

If we step back and look at the chart as a whole, we can see that it could be converted to a heightmap of sorts. We could use a best-fitting, 3D surface model and create a heightmap from each chart file. This is exactly what Bending Time intends to do. If the charts end up being too large on their own, they will be tiled for more efficient download and runtime processing.

Circling back, it’s clear that we need nautical charts when we are close to the planet’s surface. However, these very specific ocean heights will likely not make a difference to an observer flying in a virtual plane at 37,000 ft for example. This is where the geoid comes in.

We use the geoid heights for planet-scale ocean heights and nautical charts for when we are close to the planet’s surface. The geoid also serves as a way for the simulation to calculate AMSL heights, which could be used in GPS or aircraft simulation.
 
Wave Geometry

Okay, now that we can model the correct height of the ocean, we need to add waves when we are near the surface.

One of the main reasons Bending Time chose to use the local ENU coordinate frame when the observer is close to the surface is ocean rendering. By using ENU coordinates, we can emulate a flat surface and get round-Earth coordinates at the same time. This lets us use the ENU z-axis as “up” for the ocean. This is exactly true at the ENU origin, but the error in the up direction grows the farther the observer moves from the origin.
Picture
ENU coordinate system
Looking back to Finch’s “Effective Water Simulation from Physical Models,” we now implement the Gerstner waves as the first step. I use three additive waves, with the wave geometry calculated in an ocean shader in the ENU frame. This is where I left off.
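For readers who haven’t seen Gerstner waves before, the sketch below sums a handful of them on the CPU in the ENU frame; it mirrors the formulation in Finch’s chapter, but it is a plain-Python illustration rather than the actual shader code.

    import math

    def gerstner_offset(x, y, t, waves):
        # waves: list of dicts with 'amplitude' (m), 'wavelength' (m), 'speed' (m/s),
        # 'dir' (unit 2D direction) and 'steepness' (0..1).
        # Returns the (dx, dy, dz) displacement of the flat ENU ocean plane at (x, y).
        dx = dy = dz = 0.0
        for w in waves:
            k = 2.0 * math.pi / w["wavelength"]                      # wave number
            d = w["dir"]
            phase = k * (d[0] * x + d[1] * y) - k * w["speed"] * t
            q = w["steepness"] / (k * w["amplitude"] * len(waves))   # crowding control
            dx += q * w["amplitude"] * d[0] * math.cos(phase)
            dy += q * w["amplitude"] * d[1] * math.cos(phase)
            dz += w["amplitude"] * math.sin(phase)
        return dx, dy, dz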

One of the challenges I was facing was, even at the ENU surface level, the scale of the scene is very large, around 100 km x 100 km. I will circle back to ocean rendering after I complete the enhanced terrain shading work.

An alternative to the Gerstner approach is to use a Fast Fourier Transform (FFT) to simulate waves. There are several solutions I’ve seen that do this with fantastic results. However, the planetary-scale issues remain regardless of the approach you take to wave geometry.
​
Further, I highly recommend Jasper Flick’s Catlike Coding website for easy-to-understand rendering techniques. Specifically, his Waves article is a very nice introduction to wave geometry.
Picture
Catlike Coding Waves
https://catlikecoding.com/unity/tutorials/flow/waves/

Let’s move onto ocean shading and lighting.
 
Ocean Shading

As most developers do, I approached the shading of the ocean using the simplest setup possible: a sine wave. Better to get the basic shading in place before adding advanced effects. Moving vertices using a sine wave is fairly straightforward once you have the vertex in the ENU frame. The next step is deciding what colour the pixel should take. I’ve seen several ocean simulations use textures as part of determining pixel colour, which has the bonus of being able to add effects like foam/wash. However, I didn’t want to use a texture at the beginning, as I figured I should be able to shade the surface using just the light direction vector, colour and intensity. At the time, I had heard about Physically-Based Rendering (PBR) so I thought I should at least learn a little more. Well, I fell down the PBR rabbit hole. So much so that I developed a standalone PBR test tool in Unity, which I will publish in my next blog post.

Bending Time models the Sun position accurately based on the day and time and maintains a light direction vector as well as an intensity value. The light colour is dependent on the atmosphere, which is the next blog post after the PBR test tool. To cast the sunlight on the ocean, the ECEF sunlight direction vector is transformed to our ENU frame and passed to the ocean shader. From here we calculate the usual PBR vectors and apply ambient, diffuse and specular properties. The result was “meh.” At this point, I turned my attention to the colour of the sea.
​
As someone who is very familiar with NASA’s products, I started at their Ocean Color page at https://oceancolor.gsfc.nasa.gov/. I was surprised to see the effort on NASA’s part in tracking the phytoplankton biomasses across the planet. Turns out the chlorophyll concentration in the plankton is a major contributor to the colour of the ocean’s surface because the plankton live at the surface to collect the sun’s light for their photosynthesis process.
Picture
NASA’s “Chlorophyll a” global data product
This data, like most other remotely sensed data, is quite large, hence it is tiled for easier download and processing. Bending Time loads this data in, and the texels from the chlorophyll image are provided to the ocean shader. This is then summed with a “base colour,” which is just hard-coded for now. Admittedly, the results were far from perfect and the whole ocean rendering work was left in this halfway state. I look forward to getting back to this work as it’s challenging and the end “product,” an animated ocean rendered on a true-scale 3D world, will be quite novel.
 
Future

There are many more sub-topics when rendering the ocean:
​
  • Wave refraction
  • Shoaling/surf
  • Shoreline/beaches
  • Tides
  • Wind effects
  • Whitecaps
  • Joint North Sea Wave Observation Project (JONSWAP)
  • Sea foam
  • Boat wash
  • Underwater
  • Ocean bottom
  • Depth/transparency
  • God rays
  • Marine life
  • Coral reefs
 
Developing the overall solution for planetary ocean rendering has already been expensive, and I’m not even done yet! So, the additional features above must be tied to business if there’s a hope of funding their development. Having said that, the first three items, essentially ocean waves coming to shore, make a big difference to the visuals, especially the first one. You don’t want your waves coming to shore at an off-kilter angle because it would look obviously unnatural to the observer.
​
I investigated wave refraction and the concept is fairly simple: waves refract as they enter shallower water. The folks at the University of Hawaii have put together an excellent resource all about the ocean called Exploring Our Fluid Earth, found at https://manoa.hawaii.edu/exploringourfluidearth/, which includes a page about wave refraction among a plethora of other ocean topics.

Picture
Ocean Wave Refraction
https://manoa.hawaii.edu/exploringourfluidearth/physical/coastal-interactions/wave-coast-interactions

While the concept is easy to understand, implementing ocean wave refraction and surf is difficult especially on a planetary scale.
​
I leave this topic and the others listed above to a future where Bending Time is making money on ocean-related visualizations/simulations.
 
Conclusion

This blog post ended up being quite a bit more involved than I had originally anticipated so let me try and synthesize it down to a few salient points:
​
  • A global geoid defines the height of the ocean surface at planetary scales
  • Nautical charts are needed to understand the height of the ocean at local scales
  • Generating the wave geometry has been solved in XYZ and we need to be creative to apply these solutions to a round Earth
  • Shading the ocean needs to consider things like chlorophyll concentrations to accurately depict colour
  • Funding realistic planetary ocean rendering must be tied to business
 
I hope you were able to make it this far. Personally, I find the topic of planetary ocean rendering fascinating and I really look forward to getting back to work on it.
 
Explorarent maris.
 
--Sean
0 Comments

Terrain Rendering

6/9/2024

0 Comments

 
One of the major topics in developing an open-world game is the terrain. It is the foundation on which all the beautiful details of the world are placed. In a streaming 3D globe, the terrain data is not fixed; raw terrain is streamed and loaded on the fly (with the exception of 3D Tiles, which I’ll talk about later). This fundamentally changes the approach and makes the development of the 3D Earth unlike that of a typical game.
​
The general steps for rendering terrain (without vegetation) are:

  1. Load the terrain data from disk
  2. Arrange the data in memory
  3. Push the data to the GPU
  4. Generate geometry and refine
  5. Push the data into the rendering pipeline
  6. Geo morph in the vertex shader
  7. Render using Physically Based Rendering (PBR) in the pixel shader

Loading Terrain Data From Disk

Terrain data (or digital elevation data, as it’s known in the geomatics world) is used across many different industries and comes in a variety of formats. Each format has its own pros and cons for the given purpose.
  • DEM
  • heightmap
  • quantized mesh

Digital Elevation Models (DEMs) are the simplest format of elevation data. NASA developed elevation data sets from the Shuttle Radar Topography Mission (SRTM) in 2000 that still have tremendous value today. The format NASA’s data managers use is a simple binary raster on a standardized grid, stored in files with the HGT extension. This data can be used directly in a game environment, but custom code is required to load it and push it to the GPU. Once the data is on the GPU, it’s up to the rendering pipeline to render the terrain on the screen. However, it’s more typical to see heightmaps in games.
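That custom loading code is not much work. Here is a minimal sketch of reading an SRTM .hgt tile, assuming the usual layout of raw big-endian signed 16-bit samples on a square grid (3601x3601 for 1 arc-second, 1201x1201 for 3 arc-second, with -32768 marking voids).

    import struct

    def load_srtm_hgt(path, samples=3601):
        # Read a raw .hgt tile into a row-major list of rows of heights in metres.
        with open(path, "rb") as f:
            raw = f.read()
        count = samples * samples
        heights = struct.unpack(f">{count}h", raw[:count * 2])  # big-endian int16
        return [list(heights[i * samples:(i + 1) * samples]) for i in range(samples)]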

A heightmap is a raster grid of heights relative to some height datum. In the world of geomatics, the reference height datum is typically an ellipsoid approximating the shape of Earth. When rendering, the height is added to or subtracted from the ellipsoid height to get the final height of the terrain at that particular XYZ coordinate.
Picture
Example Heightmap
One can think of rendering a heightmap as extruding the height from the flat map. This has a drawback in that you can’t model caves, holes or other interesting terrain phenomena. However, for the majority of cases, heightmaps work just fine, and developers can use other techniques to model these features (e.g., clipping a volume of the terrain and inserting your own 3D model in its place).

Heightmaps are often encoded as images, though as mentioned, they don’t have to be as we saw with DEMs. When encoded as an image, a heightmap’s colours must be decoded to an actual height. There are two main methods for this encoding scheme:
  • RGB
  • Grayscale
​
RGB heightmaps typically encode the data in a weighted fashion, where the coarse portion of the height is encoded in the red channel, the medium portion in the green channel and the fine portion in the blue channel. A formula to decode RGB heightmaps is typically provided by the data supplier. For example, Mapbox’s RGB heightmaps use an encoding/decoding formula, which can be found at https://docs.mapbox.com/data/tilesets/reference/mapbox-terrain-rgb-v1/.
Picture
RGB heightmap
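As an example of how cheap the decode is, here is the Terrain-RGB formula from the Mapbox documentation linked above as a one-liner.

    def decode_terrain_rgb(r, g, b):
        # Mapbox Terrain-RGB: channels are 0-255, result is a height in metres.
        return -10000.0 + (r * 256.0 * 256.0 + g * 256.0 + b) * 0.1

    # decode_terrain_rgb(1, 134, 160) -> 0.0 metres (sea level)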
​Grayscale heightmaps are similar to RGB but they effectively only use one channel, which is usually encoded using 8 or 16 bits. The heightmap designer trades off height resolution with total size of the file depending on their needs.
Picture
Grayscale Heightmap
Another downside to heightmaps is that they are raster, i.e., they are represented in a grid format where every row/column has a height value. If you have a very flat section of terrain, the heightmap will contain redundant information. This is where Triangulated Irregular Networks (TINs) come in. TINs only contain data where there are actual changes, as described at https://gistbok.ucgis.org/bok-topics/triangular-irregular-network-tin-models.
Picture
Triangulated Irregular Network (TIN)
​Cesium uses TINs in their quantized mesh format. The data is encoded using ints. This format of terrain also supports the encoding of normals in line with their modus operandi to encode data such that it is ready for rendering.
​
Another popular format of terrain data is point clouds, in particular from Light Detection and Ranging (LiDAR). A point cloud is just like it sounds: a bunch of 3D points floating around in the air. Once these points are processed, they are translated into geodetic 3D points and can even be encoded as heightmaps (though they often stay as point clouds). When they are encoded as heightmaps, LiDAR data is often stored in a format that holds more precision than an image heightmap, e.g., grid float. Grid float files are similar to digital elevation models but with finer precision because 32-bit floats are used to encode the height value instead of an 8- or 16-bit integer.
Picture
LiDAR Point Cloud
One could imagine that using LiDAR data in a game could be really cool because you get a super high-res scan of the real world. If you add an optical camera to the collection and texture the scanned object then, absolutely, that object (with a little pre-processing) can be used in a game engine environment. Somewhat obviously, though, LiDAR data is quite large, so you are sacrificing disk space and loading time for the sake of high resolution.

LiDAR is great for capturing key objects in the world whose original form should be preserved in electronic format. However, for generic features like a forest on a mountainside, it doesn’t make sense, especially if the forest is only ever seen at a distance.

Imagery

Normally at this point we would talk about texturing the terrain. But if we take a step back to 3D globes, this is usually discussed as imagery. When satellite imagery first became available in the 1960s, it revolutionized the way we look at our planet. And then, when Google Earth hit the streets in 2005 and put satellite imagery at the fingertips of the average Joe, it was mind-blowing. Nowadays we refer to this imagery as “aerial” because a lot of the higher-resolution data is collected by planes flying tandem LiDAR-optical camera sensor systems, often over cities.
​
Once the data is collected, it must be processed, as I touched on in my It’s Your Planet blog post. This includes reprojecting the data into a map projection suitable for the end use. In Google’s case, they popularized the use of the Web Mercator projection because it represents the imagery in linear units as well as being quick and easy to render. Due to their widespread availability, Web Mercator imagery and maps can be useful to render in a 3D globe. The trick is that globe coordinates are geodetic by nature, so the Web Mercator images must be “warped” to fit the 3D terrain tile. For simple broad-area views from space, it is fine to do the reprojection from Web Mercator to geographic on the GPU in a shader. However, for more detailed scenes at the surface, Web Mercator will contain visible oddities, so it is not appropriate. To achieve the reprojection, UVs are calculated at each ECEF XYZ vertex of a “template” tile mesh.
Picture
256x256 Tile Mesh with Web Mercator Image Warped Onto It
The [default] rendering pipeline will then render the mesh using these UVs.
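The projection math behind that warping is compact. Here is a minimal sketch of the forward and inverse spherical Web Mercator formulas; computing a vertex’s UV would amount to running the forward mapping on its geodetic longitude/latitude and normalizing the result into the image tile’s extent.

    import math

    RADIUS = 6378137.0  # Web Mercator uses a sphere with the WGS84 semi-major axis

    def geographic_to_webmercator(lon_deg, lat_deg):
        # Forward projection: degrees -> metres.
        x = RADIUS * math.radians(lon_deg)
        y = RADIUS * math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))
        return x, y

    def webmercator_to_geographic(x, y):
        # Inverse projection: metres -> degrees.
        lon = math.degrees(x / RADIUS)
        lat = math.degrees(2.0 * math.atan(math.exp(y / RADIUS)) - math.pi / 2.0)
        return lon, lat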
​
Having said all that, if we always plan to render our world in 3D, there’s no sense processing our data in Web Mercator. It would be much more effective to simply encode the imagery in geographic coordinates, which is what Bending Time does.
​
One special case to note is the polar regions of Earth. Geographic coordinates are not very useful in the polar regions. As mentioned in my It’s Your Planet blog post, Bending Time may end up using polar stereographic coordinates for the polar regions. For now, let’s ignore the poles.

Features

At the risk of diverging from the subject matter of terrain, I want to talk about feature data briefly. In a 3D globe application, it is very typical to load and render map data in layers: starting with terrain, then draping on imagery, followed by rendering vector features in order of polygons, polylines and points using a styling of choice. A feature in this case could be a lake (polygon), a road (polyline) or the location of a manhole cover in a city (point).
Picture
Vector Tile Features
There are a variety of formats for feature files. A GIS standard that has seen heavy usage for decades in the ArcGIS community is the shapefile. More recently, Google’s protocol buffers are very efficient and are used as part of Mapbox’s vector tile format.

The reason for bringing vector features into the discussion is they are a staple of 3D globes but they can also be very useful for rendering things on top of our terrain like streets for example. Imagine loading a vector tile containing a country road. The data is loaded, sent to the graphics card and then with some heuristics (or actual data depending on the source) a 3D road is rendered on top of terrain that has been automatically flattened to avoid z-fighting. I will talk about this more in future blog posts.

3D Tiles

An emerging file format is the 3D Tiles specification championed by Patrick Cozzi and the folks at Cesium. In layman’s terms, 3D Tiles essentially contain the terrain data, imagery and features (3D objects encoded in glTF format in this case) all in one tile. Their raison d’être is to contain everything you need to render the data in a 3D engine quickly, without any preprocessing steps. 3D Tiles are quite effective, but Bending Time will not use them in the relatively near term. Let me explain.

To build 3D Tiles for a virtual Earth, you need to acquire all the data that is required for your virtual world. This includes terrain data, LiDAR scans of the surface and imagery, which is then packaged up and copied to the cloud for streaming by 3D virtual world clients. Now imagine you want to change something in one of the tiles or, worse, you have a new collection that you want brought in. The single tile or the entire set must be re-generated and re-deployed to the cloud. This is time-consuming and expensive, and we’re still early in the whole 3D-model-of-the-Earth-down-to-the-sidewalk-level epic journey.

This is not just a casual observation. This is a painful lesson I learned during the CP 140 Aurora trainer simulator program. The overall training simulator was required to train all the surveillance operator roles onboard the Aurora aircraft. Instead of developing redundant sets of terrain data for each sensor simulator, one “terrain database” was developed. The database was generated by taking input data in the format of DEMs, imagery and vector features and then terrain generation software was used to procedurally generate the 3D data from the 2D vector features. E.g., a 3D road would be generated from the 2D road data. TerraTools is an application from the military simulation world to do this work. The problem arose trying to deal with changes. As you could imagine, 20 or so years ago the process of generating a terrain database took a long time to complete. I believe it started out taking about a week to generate and then was “optimized” to 2 days. When changes would come in (whether from the customer, source map data or whatever), the whole database had to be re-generated. You can try to minimize the effects but the kicker is this problem has a snowball effect. The more you add to it, the bigger it grows but also the faster the growth rate. For the Aurora program, this had significant cost and schedule effects. It was a lesson that stuck with me.

I call the process of encoding features into the final 3D tiles "baking." The source data is baked into the final product. One alternative to baking is to try to do the procedural generation work at runtime. This is not an easy task, made obvious by the fact that it used to take up to a week to generate a terrain database 20 years ago! However, proc gen at runtime has the benefit of only working on the data that is loaded, which is a very small subset of the total data available for the world. In addition, technology is so advanced today that the latest powerful GPUs can do a huge amount of processing in real time.
​
For Bending Time, early on we did some experimenting with procedurally generating buildings and roads, an area I hope to return to once the base, natural world is "complete."

Circling back to 3D Tiles, I'm not saying they're no good. It's more the opposite: they are an excellent step towards an open 3D Earth. The point is to avoid "going all in" on 3D Tiles at such an early stage of a multi-decade effort.

Arrange Data in Main Memory

Once tiles are loaded in memory, they must be organized for quick and easy retrieval. A common spatial index is the quadtree, where each tile is subdivided into four quadrants at the next level down. This works well for tiles in geographic coordinates, especially if you picture the typical map of Earth where the x-axis represents longitude and the y-axis represents latitude (notwithstanding the irregularity at the poles). A small indexing sketch follows the figure below.
[Image: Quadtree Spatial Structure]
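To make the indexing concrete, here is a minimal C# sketch (the class and method names are mine, not Bending Time's) of computing which tile contains a given latitude/longitude at a given level, assuming a geographic tiling scheme with a 2x1 root level:

    using System;

    // Minimal sketch: which geographic quadtree tile contains a lat/lon at 'level',
    // assuming level 0 is 2x1 tiles covering -180..180 and -90..90 degrees.
    public static class GeoQuadtree
    {
        public static (int x, int y) TileIndex(double latDeg, double lonDeg, int level)
        {
            int tilesX = 2 << level;   // 2, 4, 8, ...
            int tilesY = 1 << level;   // 1, 2, 4, ...

            double u = (lonDeg + 180.0) / 360.0;   // 0..1 west to east
            double v = (90.0 - latDeg) / 180.0;    // 0..1 north to south

            int x = Math.Min((int)(u * tilesX), tilesX - 1);
            int y = Math.Min((int)(v * tilesY), tilesY - 1);
            return (x, y);
        }
    }

Descending the tree is then just a matter of doubling the tile counts per level and re-using the same fractions.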
One downside to the quadtree is that it doesn't balance itself based on its contents: if the loaded data is all in one particular region of the world, the tree will be very dense for that region but sparse in other areas, which can result in slower data retrieval. This is where R-trees come in. They organize the data into rectangles based on the density of data per region. The Wikipedia page at https://en.wikipedia.org/wiki/R-tree describes R-trees clearly and concisely.
[Image: R-tree Spatial Structure]
The downside, at least in terms of a virtual world, is that the rectangles in play at runtime can change based on the data that is loaded. Some algorithms (e.g., terrain geo-morphing) rely on the grid structure being known ahead of time.

In a 3D virtual world, if retrieving a tile takes 500 microseconds instead of 200, it’s not a big deal. To avoid dropping frames at render time, tile rendering may be spread across many frames. In this case, the extra microseconds can be easily handled.

Now that we can easily identify tiles, we need to figure out when to load new tiles and when to drop old tiles, which is the key function of a 3D globe.

An easy way to conceptualize the process is that the virtual world app maintains the World around the Observer regardless of scale. So if the Observer is on the surface of the Earth, the app loads fine data in the immediate vicinity of the Observer, and increasingly coarser data the farther away from the Observer you go, so long as whatever the Observer looks at appears as it would in real life and nothing is pixelated.

In order to preserve memory, tiles that are no longer needed for the current Observer view must be removed from main memory. In a nutshell, once a tile is far enough away from the Observer such that it no longer contributes to their view, it is removed from memory. The 3D Engine Design for Virtual Globes book refers to this as screen-space error. However, I prefer to think of it as an Observer’s view resolution. I.e., the finest resolution object an Observer can see at their current location. If a tile’s resolution is smaller than the view resolution, it is removed from memory.
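As a rough illustration of that view-resolution test, here is a small sketch (the names and the simple pinhole-camera math are mine, not the book's screen-space-error formula verbatim):

    using System;

    // Sketch of the "view resolution" unload test described above.
    public static class TileCulling
    {
        // Finest ground resolution (meters per pixel) the Observer can resolve
        // at a given distance, for a camera with the given vertical FOV.
        public static double ViewResolutionMeters(double distanceMeters,
                                                  double verticalFovRadians,
                                                  int screenHeightPixels)
        {
            double viewHeightMeters = 2.0 * distanceMeters * Math.Tan(verticalFovRadians / 2.0);
            return viewHeightMeters / screenHeightPixels;
        }

        // A tile whose resolution is finer than the view resolution no longer
        // contributes to the view and can be removed from memory.
        public static bool ShouldUnload(double tileResolutionMeters,
                                        double distanceMeters,
                                        double verticalFovRadians,
                                        int screenHeightPixels)
        {
            return tileResolutionMeters <
                   ViewResolutionMeters(distanceMeters, verticalFovRadians, screenHeightPixels);
        }
    }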
 
Push Data to GPU

​Now that we are loading and removing tiles in main memory, we need to get the data to the graphics card for rendering.
​
A typical flow for getting geometry from main memory to video memory is to make a draw call with a buffer as described at https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Object.
[Image: OpenGL's Shader Storage Buffer Object (SSBO)]
In Unity, this is done using the ComputeBuffer object passed to a call to a compute shader’s SetBuffer() function. The whole usage of compute shaders in Unity is described well at https://catlikecoding.com/unity/tutorials/basics/compute-shaders/.
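For reference, here is a minimal Unity-flavoured sketch of that flow. The shader asset, kernel name, buffer names and thread-group size are placeholders, not Bending Time's actual code:

    using UnityEngine;

    // Sketch: push a tile's DEM heights into a compute shader via ComputeBuffers.
    public class TerrainTileUploader : MonoBehaviour
    {
        public ComputeShader terrainCompute;     // e.g., a terrain refinement kernel
        private ComputeBuffer heightBuffer;
        private ComputeBuffer vertexBuffer;

        public void Upload(float[] heights, int tileResolution)
        {
            // One float per DEM post in, three floats (xyz) per vertex out.
            heightBuffer = new ComputeBuffer(heights.Length, sizeof(float));
            heightBuffer.SetData(heights);
            vertexBuffer = new ComputeBuffer(heights.Length, sizeof(float) * 3);

            int kernel = terrainCompute.FindKernel("CSMain");
            terrainCompute.SetInt("_TileResolution", tileResolution);
            terrainCompute.SetBuffer(kernel, "_Heights", heightBuffer);
            terrainCompute.SetBuffer(kernel, "_Vertices", vertexBuffer);

            // Assumes [numthreads(8,8,1)] in the shader.
            int groups = Mathf.CeilToInt(tileResolution / 8.0f);
            terrainCompute.Dispatch(kernel, groups, groups, 1);
        }

        private void OnDestroy()
        {
            heightBuffer?.Release();
            vertexBuffer?.Release();
        }
    }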

Compute Shader for Geometry and Refinement

​Once the terrain data is loaded onto the GPU, we execute a compute shader to refine the terrain data to fill in the gaps between what the Observer sees and the original resolution of the DEM tile. Brano Kemen of Outerra describes this process quite well at https://outerra.blogspot.com/2009/02/procedural-terrain-algorithm.html.

This is equivalent to the tessellation render stage; however, that stage typically applies a smoothing algorithm, whereas we generally want the terrain to look rough (i.e., rocky), depending on the landform being rendered.

Bending Time has implemented compute shaders to receive the terrain data and output the corresponding mesh buffers, which are then read in later stages of the rendering pipeline. The fractal refinement has been experimented with, but a final GPU-based solution has not been implemented yet. This will be one of the first tasks when I get back to development.

There are alternatives to using compute shaders. Bending Time originally experimented with geometry shaders to generate the terrain mesh. This was achievable and looked promising; however, since then, the increasing flexibility of compute shaders and buffers has made them the clear choice for transforming terrain data from disk into mesh data on the GPU.
 
Push Data Through Pipeline

Pushing the data through the pipeline in Unity is as simple as creating a MeshRenderer with a shader that reads the compute buffers from the previous step. Unity executes the shader, which normally contains vertex and fragment sections to output the vertex position and pixel colour respectively.
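As a sketch of the C# side of that hookup (the property names are placeholders; Bending Time's setup uses a MeshRenderer, whereas Graphics.DrawProcedural, shown here, is another common way to consume a vertex buffer without a Mesh asset):

    using UnityEngine;

    // Sketch: bind the compute shader's vertex output to a material and draw it.
    public class TerrainTileRenderer : MonoBehaviour
    {
        public Material terrainMaterial;     // its vertex stage reads _Vertices
        private ComputeBuffer vertexBuffer;
        private int vertexCount;

        public void Bind(ComputeBuffer vertices, int count)
        {
            vertexBuffer = vertices;
            vertexCount = count;
            terrainMaterial.SetBuffer("_Vertices", vertexBuffer);
        }

        private void Update()
        {
            if (vertexBuffer == null) return;

            // Generous bounds so the tile isn't culled while experimenting.
            var bounds = new Bounds(Vector3.zero, Vector3.one * 100000f);
            Graphics.DrawProcedural(terrainMaterial, bounds, MeshTopology.Triangles, vertexCount);
        }
    }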

As the terrain rendering in Bending Time gets more advanced, shaders at different stages will be investigated.
 
Geo Morph in the Vertex Shader

In a 3D virtual globe application, tiles are loaded and dropped all the time as the user moves around the world. A well-known issue with this is that the terrain can "pop" into or out of the scene, which can be jarring to the Observer. To ease these transitions, a technique called geo-morphing was created.

Bending Time has not implemented geo-morphing yet but has investigated the solution space and determined the vertex shader is the best place to implement this transition. The Continuous Distance-Dependent Level of Detail paper found at https://hhoppe.com/svdlod.pdf describes the approach.
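For a flavour of what the vertex shader would compute, here is a small CPU-side sketch of a distance-based morph factor; the start/end distances and names are illustrative, not taken from the paper or from Bending Time:

    using UnityEngine;

    // Sketch of a distance-based geo-morph factor and the vertex blend it drives.
    public static class GeoMorph
    {
        // 0 = fully fine mesh, 1 = fully morphed toward the coarser parent mesh.
        public static float MorphFactor(float distanceToObserver, float morphStart, float morphEnd)
        {
            return Mathf.Clamp01((distanceToObserver - morphStart) / (morphEnd - morphStart));
        }

        // In the vertex shader, the same factor lerps each vertex toward its
        // position in the coarser level, removing the visible "pop".
        public static Vector3 MorphVertex(Vector3 finePosition, Vector3 coarsePosition, float factor)
        {
            return Vector3.Lerp(finePosition, coarsePosition, factor);
        }
    }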
 
Physically-Based Rendering

One of the last stages of the render pipeline is the pixel shader. As described in Microsoft's Direct3D documentation, the pixel-shader stage (PS) enables rich shading techniques such as per-pixel lighting and post-processing. The most common approach these days is to use Physically-Based Rendering (PBR) shading techniques.

Bending Time implemented a PBR pipeline for its 3D ocean but this has not been applied to terrain yet. However, I’ll spend the remainder of this last section describing the planned approach.

One of the starting points for rendering an object using PBR is the albedo. From Wikipedia, albedo is the fraction of sunlight that is diffusely reflected by a body. In games, the albedo texture is often just the base colours, excluding normals, etc. However, NASA measures albedo on the Earth's surface as just the amount of light being reflected, not its colour. For our purposes here, I will think of the albedo for Bending Time's terrain as incorporating both the colour and [base] intensity. Maybe this is how everyone thinks of it, I dunno.
​
The obvious choice for albedo is from the aerial imagery. This is a good starting point but I suspect the solution for albedo will involve more steps.

One of those steps could be the incorporation of land use imagery, also known as land cover classification.
[Image: Land Cover Classification Imagery]
Looking at the image above, one might consider using the LCC image by looking up a texture based on the pixel classification, e.g., red is urban, green is forests. This idea has merit, but it's only part of the story. It would seem a combination of the optical imagery and the LCC imagery is the best approach. One of the considerations is the typical resolution of each type of data: aerial [optical] imagery taken from a plane is often down to 1 m resolution, whereas LCC imagery, most often derived from satellite imagery, is lower resolution. Bending Time plans to primarily use the optical imagery and use the land cover imagery to "correct" any pixels (actually materials). For example, if there is a green building top, the pixel shader might interpret the material as forest and render it accordingly. The land cover classification would say it's urban and could correct the reflectance in this case. A rough sketch of this idea follows below. This will need to be experimented with, and surely there will be an ongoing effort to improve and tune the albedo portion of the shading.
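To make that concrete, here is a rough sketch of the correction step; the land cover classes, reference colours and blend weight are all assumptions to be experimented with, not Bending Time's implementation:

    using UnityEngine;

    // Sketch: nudge the optical pixel's reflectance toward a reference albedo
    // for its land cover class, so a green rooftop isn't shaded as forest.
    public static class AlbedoCorrection
    {
        public enum LandCover { Urban, Forest, Water, Snow, Bare }

        public static Color Correct(Color opticalPixel, LandCover landCover)
        {
            Color reference;
            switch (landCover)
            {
                case LandCover.Urban:  reference = new Color(0.35f, 0.35f, 0.35f); break;
                case LandCover.Forest: reference = new Color(0.05f, 0.15f, 0.05f); break;
                case LandCover.Water:  reference = new Color(0.02f, 0.05f, 0.08f); break;
                case LandCover.Snow:   reference = new Color(0.90f, 0.90f, 0.95f); break;
                default:               reference = new Color(0.30f, 0.25f, 0.20f); break;
            }

            // Keep most of the optical detail; the blend weight is a tuning knob.
            return Color.Lerp(opticalPixel, reference, 0.25f);
        }
    }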
​
There are other factors in the albedo as well, such as seasonal changes. Specifically, snow and ice can come and go throughout the seasons depending on the environment/biome of the region. For mountain views, the snow level can be calculated based on altitude and the average temperature of the region. This could enable automatic identification of snow that would then be rendered with a snow material. This is just one example of the plethora of solutions/implementations for procedural material identification and rendering.

The next aspect of PBR shading is the normal vectors. These will be computed in the vertex shader on the post-fractal-refined terrain, so by the time we're in the pixel shader, the normal vectors will be available.

Another aspect of PBR rendering is the “metallicness” of the material. Metal has unique properties when it comes to reflecting light and the effect is often implemented in a PBR solution. I won’t talk about the metallic component at this time because I’m not sure how it will come into play for the natural features on our planet.

Now that we have the material properties of the object, we need to discuss the light. In Bending Time, the true Sun position is calculated and the sunlight vector is passed to the terrain shaders (the Moon as well, actually). In addition to the direction, the sunlight colour is calculated using the atmospheric rendering shaders. I'll discuss atmospheric rendering in a future blog post. From here, we are ready to render the final pixels on the screen.

In Bending Time, this involves the summation of the main types of reflectance (a simple sketch follows the list below), which include:
  • Ambient
  • Specular
  • Diffuse
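Here is a simplistic sketch of that summation: a constant ambient term, a Lambert diffuse term and a Blinn-Phong-style specular highlight. It mirrors the simple model described here, not a full PBR BRDF, and the coefficients are illustrative:

    using UnityEngine;

    // Simplistic ambient + diffuse + specular summation (not a full PBR BRDF).
    public static class SimpleShading
    {
        public static Color Shade(Color albedo, Vector3 normal, Vector3 toSun, Vector3 toEye,
                                  Color sunColour, Color ambientColour, float specularPower)
        {
            normal = normal.normalized;
            toSun = toSun.normalized;
            toEye = toEye.normalized;

            // Ambient: constant sky/ground bounce.
            Color ambient = albedo * ambientColour;

            // Diffuse: Lambert term against the sunlight direction.
            float nDotL = Mathf.Max(0f, Vector3.Dot(normal, toSun));
            Color diffuse = albedo * sunColour * nDotL;

            // Specular: Blinn-Phong half-vector highlight.
            Vector3 halfVector = (toSun + toEye).normalized;
            float nDotH = Mathf.Max(0f, Vector3.Dot(normal, halfVector));
            Color specular = sunColour * Mathf.Pow(nDotH, specularPower);

            return ambient + diffuse + specular;
        }
    }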

This is a simplistic PBR solution that is currently only implemented in BT’s ocean rendering solution, which is still not complete. However, the core pieces are all there and I will look at implementing fractal refinement and PBR terrain rendering as one of my first tasks when I return to development of Bending Time’s lifelike virtual world.

Terra firma.

--Sean
0 Comments

Earth Coordinate Systems

5/19/2024

2 Comments

 
I suspect building a lifelike virtual world (i.e., Google Earth comes to life like a video game) is an attractive idea for a lot of software developers. However, most of them don't know where to start. It's a huge undertaking. Like me, many of you bought the 3D Engine Design for Virtual Globes book written by Patrick Cozzi and Kevin Ring. It is an excellent book that I recommend. And as most of you know, Patrick went on to create Cesium, the JavaScript-powered 3D globe, which is also excellent. Patrick and his team have been doing great work and are pioneers in the "Open Geospatial Metaverse" space. However, there are things that Cesium (like Google Earth) doesn't do that keep me pushing forward on my own virtual world. For example, a lifelike 3D ocean. Back to the book: I won't repeat content that is written there, as it covers a variety of topics very well. Instead, I will provide more practical descriptions for true-scale planet rendering, as well as describe areas the book does not cover.
 
Having said the above, I am no expert in true-scale planetary rendering. That's because there are no experts! One can really only talk about their own experience. There have been people before me, like Ben Discoe, who developed and still maintains the Virtual Terrain Project. And I would be remiss if I did not mention Brano Kemen, the genius behind Outerra. As for myself, besides the passion described in my Hello Again, World blog post, probably my most significant (and relevant) accomplishment was building a radar simulator for the Canadian Air Force. I designed the simulator to dynamically load terrain data so it could go anywhere in the world. It was delivered to the Department of National Defence (DND) and is still used today to train radar operators working on the CP 140 Aurora maritime patrol aircraft. On this project and others, my background has been more on the geospatial side of the house, not games per se. It has been, and continues to be, an uphill journey learning the ins and outs of game development. I hired several people from the game development community, but it turns out planet-scale rendering is hard, even for a seasoned game dev. If you are someone who thinks they can help me out, or you know of someone, give me a shout. Okay, enough of the background chatter, let's get to the good stuff.
 
Coordinate Systems
​

One of the first things you need to do when setting up a game development environment is determine where the world origin is. For Earth, we start with the Earth-Centered, Earth-Fixed (ECEF) coordinate system.
[Image: ECEF Figure from Wikipedia]
The ECEF coordinate system is centered at Earth's center of mass, with the X axis pointing off the coast of Africa, the Y axis in the Indian Ocean and the Z axis at the [true] North Pole. The latitude and longitude are shown as the φ and λ angles respectively. The point represented by the outer corner of the green box has XYZ coordinates in this frame.
​
From the figure you can see the surface normal at the example point doesn’t extend down through the center of Earth. This is due to the oblate spheroid shape of the Earth and results in φ angles referred to as geodetic latitudes. The line extending from the surface point through to the center of Earth yields a φ angle known as the geocentric latitude. For practical purposes, just use geodetic latitudes as they are the most common. The λ angle is the longitude and together, they are commonly referred to as geographic coordinates. If you add in the height above or below the reference ellipsoid, the trio of numbers is often referred to as geodetic coordinates. To convert geographic/geodetic coordinates to ECEF, refer to the Geographic coordinate conversion page on Wikipedia.
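To make the forward conversion concrete, here is a minimal C# version of those standard WGS84 formulas (the class and method names are mine; Bending Time's own code may differ):

    using System;

    // Geodetic (lat, lon, height) to ECEF using the WGS84 ellipsoid.
    // Angles are in radians, height and the result are in meters.
    public static class Wgs84
    {
        public const double A = 6378137.0;                  // semi-major axis (m)
        public const double F = 1.0 / 298.257223563;        // flattening
        public static readonly double E2 = F * (2.0 - F);   // first eccentricity squared

        public static (double x, double y, double z) GeodeticToEcef(double latRad, double lonRad, double heightM)
        {
            double sinLat = Math.Sin(latRad);
            double cosLat = Math.Cos(latRad);

            // Prime vertical radius of curvature.
            double n = A / Math.Sqrt(1.0 - E2 * sinLat * sinLat);

            double x = (n + heightM) * cosLat * Math.Cos(lonRad);
            double y = (n + heightM) * cosLat * Math.Sin(lonRad);
            double z = (n * (1.0 - E2) + heightM) * sinLat;
            return (x, y, z);
        }
    }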
[Image: Geodetic coordinate conversion from Wikipedia]
​To convert back from ECEF to geodetic, I use the iterative method described in the paper “Transformation from Cartesian to Geodetic Coordinates Accelerated by Halley’s Method” by T. Fukushima. It seems the paper is now behind paywalls so I have snipped the XYZToLatLon() function from Bending Time and made it available here to download. Note there may be newer methods than this one from Fukushima in 2006 but generally, the reverse process from XYZ to Lat/Lon/Height is an iterative one.
 
Other useful calculations in a 3D globe are ellipsoid distance and bearing/range. For these, I use the Vincenty direct and inverse formulae, which are described in his original paper at https://www.ngs.noaa.gov/PUBS_LIB/inverse.pdf. You can find implementations of his formulae all over the web.

An even more accurate solution for ellipsoid distance and bearing/range comes from Charles Karney, who developed the GeographicLib project, which can be found at https://geographiclib.sourceforge.io/.
 
Once you are armed with these geodetic, ECEF and ellipsoid functions, you have the basis for a 1-cm-precise 3D virtual world. I say precise and not accurate because the calculations may be accurate but the data you use may not be. Which leads us back to maps.
 
Map Projections
​

Most municipalities around the world map out their region on Earth and maintain a Geographic Information System (GIS) to manage and use the data. They hire surveyors to map out the city, take measurements and upload the data to the GIS. As part of their mapping process, they use GPS antennae, which give them very precise geodetic coordinates; however, the geographic format is not easy to work with. It doesn't directly translate into, say, how many meters long a sidewalk is. So they typically save the data in map-projected coordinates like those from the Universal Transverse Mercator (UTM) grid system. This gives them, and the people who read the maps, a nice view where all the pixels are square in linear units like meters or feet.
[Image: Universal Transverse Mercator (UTM) Grid System]
If you’re interested in learning more about map projections, I recommend the bible in the domain “Map Projections: A working manual” by John Snyder, which can be found at https://pubs.usgs.gov/publication/pp1395. I bought a bound version online as hard copies make referencing things so much easier imo.

The reason I'm circling back to maps is that they are the major source of data for a 3D virtual Earth. As I talk about in my It's Your Planet blog post, the open data trend continues, and many cities offer their data for free with a commercial-friendly open license, albeit often in map-projected coordinates.
​
Given the square and equidistant pixels in map-projected coordinates, one might think this would be a suitable coordinate system to use when a user is playing on the ground in the lifelike virtual world. And it’s true, this would work but only for small regions at a time. If the user moves too far away from the map projection area, the math breaks down and you have to move your region to another zone, or use another projection or what have you. In other words, the world is no longer seamless and transitioning from one zone to another could be problematic.
 
Having said that, one interesting idea I had was to use Universal Polar Stereographic (UPS) coordinates for the polar regions, especially Antarctica. A lot of the data that is collected near the poles is encoded in polar stereographic coordinates, so having the world use the same map projection as the data would alleviate a lot of map conversion work. I thought a gameable environment in Antarctica would be unique and could be pretty fun; however, the cost/risk/benefit ratios were not adding up, so this idea never materialized. (Though I still have copious data for Antarctica, mostly in polar stereographic coordinates.)
[Image: Universal Polar Stereographic (UPS) Grid System]
​A good reference for both UTM and UPS (aka the Universal Grids) can be found at https://apps.dtic.mil/sti/tr/pdf/ADA323733.pdf.
 
The one other thing that you need to watch out for when using data from different sources is the geodetic datum used in the map projection calculations, that is, the shape of the ellipsoid at a point in time. When building a true-scale 3D Earth, your data must all align, otherwise terrestrial features may look odd or out of place. The common solution is to transform all your data to the World Geodetic System 1984 (WGS84) datum; however, there are a lot of other factors that go beyond the scope of this blog post, so I will leave it at that: just use WGS84.
 
Local Tangent Plane Coordinate Systems
​

Other coordinate systems to be aware of, used in aerospace applications all the time, are the local tangent plane coordinate systems: specifically, the East-North-Up (ENU) and North-East-Down (NED) frames. The ENU frame is useful for modeling objects on the surface, while the NED frame is often used for objects above the surface looking down (e.g., aircraft).
​
One of the best (and easy to understand) references for the ENU and NED frames that I found is “A Pseudo-Reversing Theorem for Rotation and its Application to Orientation Theory” by Don Koks, which can be downloaded from apps.dtic.mil/sti/citations/ADA561412. In particular, the figure he drew to explain the rotations needed to produce a NED vector from geographic coordinates is clear and easy to understand.
[Image: NED Rotations from "A Pseudo-Reversing Theorem for Rotation and its Application to Orientation Theory" by Don Koks]
It's the same set of rotations for ENU, just mind your axes. One can imagine that a local tangent plane modeled in meters AND accounting for the curvature of the Earth could be pretty useful, and I can tell you that it is! I discuss this a little more below.
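As an illustration, here is a small sketch of building the ENU rotation at a geodetic location and expressing an ECEF offset in local east/north/up meters. The names are mine; the NED frame uses the same ingredients with the axes reordered and "up" negated:

    using System;

    // Rotate an ECEF offset (dx, dy, dz) into the local East-North-Up frame
    // at geodetic latitude/longitude (radians).
    public static class LocalTangentPlane
    {
        public static (double e, double n, double u) EcefToEnu(
            double dx, double dy, double dz, double latRad, double lonRad)
        {
            double sinLat = Math.Sin(latRad), cosLat = Math.Cos(latRad);
            double sinLon = Math.Sin(lonRad), cosLon = Math.Cos(lonRad);

            // Rows of the ECEF-to-ENU rotation matrix.
            double e = -sinLon * dx + cosLon * dy;
            double n = -sinLat * cosLon * dx - sinLat * sinLon * dy + cosLat * dz;
            double u =  cosLat * cosLon * dx + cosLat * sinLon * dy + sinLat * dz;
            return (e, n, u);
        }
    }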
 
Depth Buffer
​

One of the topics covered by the virtual globe book is the depth buffer. At planetary scales, your rendering software will suffer from limited depth buffer precision/resolution. The result is erroneous depth occlusion (e.g., z-fighting). As mentioned in the book, Outerra uses a logarithmic depth buffer. The authors discuss another solution, which is to use multiple frustums. Because Bending Time uses Unity, it was simple to add a second "close" camera to achieve multiple frustums.
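A minimal sketch of that two-camera setup in Unity follows; the clip-plane distances are illustrative, and the key detail is that the near camera renders after the far camera and clears only the depth buffer:

    using UnityEngine;

    // Sketch: split the scene across two frustums so each gets its own depth precision.
    public class DualFrustumSetup : MonoBehaviour
    {
        public Camera farCamera;
        public Camera nearCamera;

        private void Start()
        {
            farCamera.nearClipPlane = 5000f;
            farCamera.farClipPlane = 50000000f;    // out to planetary distances
            farCamera.depth = 0;                   // renders first

            nearCamera.nearClipPlane = 0.5f;
            nearCamera.farClipPlane = 6000f;       // overlaps the far frustum slightly
            nearCamera.depth = 1;                  // renders second
            nearCamera.clearFlags = CameraClearFlags.Depth;   // keep the far camera's colour
        }
    }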
 
Coordinate Precision

Another area of concern for planet-scale rendering is object jitter, i.e., an object begins to shake when its coordinates get too large. (A 32-bit float has a 24-bit mantissa, so at an ECEF surface radius of roughly 6,378,000 m the representable spacing is about half a meter, far too coarse for a ground-level view.) Read Chapter 5 in the book for more info. This is still one of the main problems with planetary-scale rendering. The virtual globe book provides three solutions:

  1. Render relative to center
  2. Render relative to eye
  3. Emulate 64 bits on the GPU using two 32 bit floats

Early on in Bending Time, I explored the solution space for #1. In a nutshell, every game object has coordinates relative to something else, a parent. It makes sense algorithmically; however, once I got into the coding, working with the coordinates was cumbersome. It was difficult to know where an object was located just by looking at its coordinates, and my mental model was blurred.

Render relative to eye has been a longstanding concept stemming back to the original OpenGL days. Instead of rotating the camera around the world, rotate the world around the camera. In recent times, Dr. Chris Thorne at https://floatingorigin.com/ seems to be the main proponent of this approach. I didn't get very far down this path before turning around, because so much of my work was in the pre-render stage. Pre-runtime stage, even! Managing all the map data, re-projecting and transforming, etc. With all of that, plus the runtime work of streaming the terrain data based on the observer's position and perspective, loading it onto the GPU and then finally rendering, it didn't make sense to try to change my coordinate systems at that point. However, the core idea has merit and may be worth exploring in some scenarios.

That led me to solution #3, emulating a 64-bit coordinate. I had done some research and learned that there was a significant performance hit, so I stopped there. However, that was almost 8 years ago and I suspect optimizations have continued, like they usually do. For example, I came across a tweet from Sebastian Aaltonen in which he describes using three 32-bit integers to emulate 64-bit planet-scale coordinates. Integers are full rate, as he says, so there wouldn't be a performance loss per se, except you'd need 3x more memory. Worth exploring for sure.
​
Where I ended up was dynamically switching to a local ENU plane as the observer reaches a certain height above ground. The coordinates in Bending Time are managed in Geographic/ECEF pairs from loading data to pushing the data to the GPU. The transformation to the ENU frame is done on the GPU and uses trigonometry around the lat/lon angles, so the hope is that object jitter will be reduced. (Lat/lon angles only need about 7 decimal places of a degree for approximately 1 cm of precision.) Plus, the coordinates all go through the same pipe before going to the GPU in Bending Time, so if it's determined that a translation from Geo/ECEF to something else is needed, it can be done in that pipe.
 
Once I complete the “here’s where I am now” blog series, I will dust off my development environment and explore this topic further with example videos.
 
Locus on terra.
 
--Sean
2 Comments

It's Your Planet

5/13/2024

0 Comments

 
You may have noticed the tag line on the landing page of the Bending Time website and wondered what it means. Or at least thought, is this guy for real? Well, someone typed this text!
It goes back to my ethos for the company: build a lifelike virtual world based on open data and make it free for everyone to access. Revenue would come from the monetization of goods and services provided in the world. Another way to say it: the lifelike virtual world Bending Time is building is, in spirit, owned by the people. All the people who manage and contribute to the OpenStreetMap project, the folks working at a nation's federal level acquiring and publishing national datasets with an open license, GIS technicians at cities, surveyors, imaging satellite engineers, pilots flying optical and LiDAR sensors, etc., and, don't forget, the nation's citizens paying taxes to their governments to make all this open data possible.
 
As a map aficionado, I have never had a problem getting motivated to collect and play with map data. It really has been a hobby of mine over the years. This hobby then turned into a core capability of Bending Time: the acquisition, validation, review, assessment, cleaning, reprojection, datum shifting, tiling and hosting of open-license maps on servers in Bending Time's cloud.
[Image: Bending Time Map Status 2017]
​Sometime after I put the company on hiatus, I was forced to shut down the virtual servers and virtual disks that hosted Bending Time’s map data. But I still maintain the data on external hard drives as that data represents hundreds of hours of download and processing time.
 
The data I collect varies from global datasets like NASA’s Shuttle Radar Topography Mission (SRTM), a staple digital elevation model (DEM) that still has value today, to local cities’ LiDAR and optical imagery like that which is available on the City of Vancouver’s Open Data Portal.
[Image: SRTM Elevation Tile]
[Image: Vancouver Aerial Image Example]
The last 20 years have seen significant and consistent growth of open map data across the globe as governments realized they benefit from releasing the data to the public, even for commercial use. Easier access to the data reduces the friction for people creating value-added products and services. So much so that projects like OpenStreetMap are being used for commercial purposes (e.g., Mapbox).
[Image: OpenStreetMap Example]
Making map data open to the public is a fantastic first step by the global geospatial community. However, the process from acquiring the data to end use in an application is still laborious. In the case of a 3D globe, or a video game streaming content, the data must be cleaned, standardized, optimized and, in the latter case of video games, artistically brushed up. Moreover, streaming the data means it must be hosted on a server somewhere that someone needs to pay for. Ultimately, making this productized data available in the cloud must be associated with a business. That poses a challenge for the open data community. But I am confident that we can find creative ways to monetize goods and services in this new 3D frontier.

Ad astra!
​
--Sean
0 Comments