Glossary of Terms, Phrases, Techniques and Practices Related to 3D Art Software
Artomatix has created the world’s first AI-powered software to automate 3D content creation. Our technology was first introduced to Game Design, but its uses have since expanded to the industries of Textiles and Fashion, Interiors, Automotive and Product Design. Our team has extensive experience in this field, but we understand that most people are new to the technologies that drive 3D Art and Product Design.
Our team has studied and worked with 3D Art Software for years. However, many people arrive at 3D Art Design from completely different backgrounds, and it is understandable to be unfamiliar with the terminology of this relatively new field. As we are often asked to clarify concepts related to the industry, we decided to create this Glossary of Terms. It will be expanded regularly, and we hope you find it of use.
Automatic Seam Removal
A texture captured with a camera or downloaded from the internet usually has seams: noticeable discontinuities that appear where the borders of the image meet when it is placed next to itself.
The texture must first be made seamless before it can be repeated, or tiled, over a mesh. This has traditionally been a manual task, though ArtEngine seeks to automate it.
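As a rough illustration of the problem, the sketch below (a minimal numpy example; the function names are our own, not ArtEngine's) measures how badly a texture's opposite borders disagree, and then removes the seam naively by cross-fading a small margin at each border with the content wrapped around from the opposite side. Production tools are far more sophisticated, but the before/after seam error shows what "seamless" means in practice.

```python
import numpy as np

def seam_error(tex: np.ndarray) -> float:
    """Mean absolute difference between opposite borders.
    0 means the texture already tiles with no visible seam."""
    tex = tex.astype(float)
    horizontal = np.abs(tex[:, 0] - tex[:, -1]).mean()
    vertical = np.abs(tex[0, :] - tex[-1, :]).mean()
    return (horizontal + vertical) / 2

def blend_seams(tex: np.ndarray, margin: int = 16) -> np.ndarray:
    """Naive seam removal: cross-fade a small margin at each border
    with the content wrapped around from the opposite border."""
    src = tex.astype(float)
    h, w = src.shape[:2]
    out = src.copy()
    for i in range(margin):                      # blend top rows with bottom rows
        a = 0.5 * (margin - i) / margin          # 0.5 at the border, fading inward
        top, bottom = src[i].copy(), src[h - 1 - i].copy()
        out[i] = (1 - a) * top + a * bottom
        out[h - 1 - i] = (1 - a) * bottom + a * top
    mid = out.copy()
    for j in range(margin):                      # blend left columns with right columns
        a = 0.5 * (margin - j) / margin
        left, right = mid[:, j].copy(), mid[:, w - 1 - j].copy()
        out[:, j] = (1 - a) * left + a * right
        out[:, w - 1 - j] = (1 - a) * right + a * left
    return out
```

Note the trade-off: simple cross-fading makes the borders match but can ghost or blur detail in the blended margin, which is why smarter, content-aware approaches are preferred.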
What is an “Example-based” workflow?
Example-based working means starting with an initial inspiration and allowing artificial intelligence to imagine seemingly endless possibilities based on one input.
The Texture Mutation feature within ArtEngine does this to perfection! Performed in under a minute, the solution brings greater variety and life to the environments you create. This is one of 50 nodes within ArtEngine helping artists create 3D content quicker than ever before.
See the entire Texture Mutation process in action in the video below. Given an input material, ArtEngine can mutate it into seamless variants and grow (or shrink) it to any size. As a result, you can always synthesize a texture with the desired resolution and texel density relative to the real world. This entire process is a perfect example of an “Example-based” workflow.
HDRI
HDRI stands for High Dynamic Range Imaging. To understand HDRI we must first look at how traditional 8-bit bitmap imagery stores colour information: in Red, Green and Blue channels, each holding a value between 0 and 255. In total this gives us a little over 16 million colours we can represent. While this is a large number, it is far fewer than the human eye can actually perceive.
In contrast, HDRIs store not only colour information but also luminance values. In this manner, a more accurate approximation of human vision can be achieved.
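The arithmetic behind those numbers, and the key difference between 8-bit and HDR storage, can be shown in a few lines (a sketch with a made-up pixel value, not any particular file format):

```python
import numpy as np

# An 8-bit bitmap stores each of R, G and B as an integer from 0 to 255.
levels = 2 ** 8                      # 256 values per channel
total_colors = levels ** 3           # every R/G/B combination
print(total_colors)                  # 16777216, "a little over 16 million"

# An HDR image stores floating-point radiance instead. Values above 1.0
# (e.g. the sun in a skydome) are preserved rather than clipped, which
# is what makes HDRIs useful as light sources.
pixel = np.array([0.2, 7.5, 0.9])            # hypothetical HDR pixel
ldr_pixel = np.clip(pixel, 0.0, 1.0)         # an 8-bit pipeline clips to [0, 1]
# The bright highlight (7.5) survives in HDR but is lost in LDR (1.0).
```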
In a 3D context, HDRIs are often used to generate lighting when rendering a scene. Because they contain continuously varying values of colour and light, they are an efficient way of generating variance without manually placing a great many lights.
The strength of the dynamic range at play can be seen in the animation below.
Material Generation
Material or texture generation is a method of deriving texture maps from a given input. It is a widespread technique, and there are many options to choose from when looking to create materials.
The most important maps in this workflow are the normal and height maps. Both can be generated from a single colour image, although the quality of the results is not as high as that offered by scanning. Alongside normal and height maps, ambient occlusion maps are also frequently created.
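To make the normal-from-height step concrete, here is a minimal sketch of the core idea most single-image normal generators share: take finite-difference gradients of the height map, tilt the surface normal against the slope, and pack the result into the familiar purple-blue RGB encoding. (The function name and `strength` parameter are our own illustration, not any specific tool's API.)

```python
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Derive a tangent-space normal map from a height map using
    finite-difference gradients."""
    dy, dx = np.gradient(height.astype(float))
    nx = -dx * strength                      # surface tilts against the slope
    ny = -dy * strength
    nz = np.ones_like(nx)                    # z points out of the surface
    length = np.sqrt(nx**2 + ny**2 + nz**2)  # normalise to unit vectors
    normal = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Pack [-1, 1] into the usual [0, 255] RGB encoding.
    return ((normal * 0.5 + 0.5) * 255).round().astype(np.uint8)
```

A perfectly flat height map encodes as the uniform (128, 128, 255) colour that gives normal maps their characteristic blue tint.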
PBR
PBR, or physically based rendering, is a firm standard within the games industry and is growing in popularity beyond it. It is driven by the need for a level of standardization when making and rendering art.
It is a system designed to incorporate real-world measurements of materials to ensure that the physical laws that govern our world also govern our digital materials. This helps artists achieve the greatest sense of reality in their work while removing as much ambiguity from the texture creation process as possible.
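One example of such a physical law is Fresnel reflectance: every surface becomes more mirror-like at grazing angles. Most PBR shading models build this in via Schlick's well-known approximation, sketched below (a generic illustration, not any particular engine's shader code):

```python
def fresnel_schlick(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of Fresnel reflectance. f0 is the measured
    reflectance at normal incidence (about 0.04 for most dielectrics).
    cos_theta is the cosine of the angle between view and normal."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Viewed head-on, a dielectric reflects ~4% of light; at a grazing
# angle, nearly 100% -- a law PBR materials obey automatically.
print(fresnel_schlick(1.0, 0.04))   # ~0.04
print(fresnel_schlick(0.0, 0.04))   # ~1.0
```

Because f0 comes from real-world measurement, artists author a physically meaningful value once and the material reacts correctly under any lighting.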
The PBR standard is not limited to texture creation principles but also concerns lighting and shading standards. This can be seen in the image below: the same textures are placed into four different environments and react correctly in each, with no further manual tweaking required.
Photogrammetry
Photogrammetry is a technique in which a detailed 3D mesh of an object is generated from a set of overlapping photographs captured from different angles. Texture maps of impressive detail can then be baked from the resulting mesh.
It’s a process that has been used for decades, particularly for the topographic mapping used in geological surveys. As consumer-level hardware and processing power have improved, the technique has become available to a wider range of users, particularly in the VFX, Industrial Design, and gaming industries.
Photometry
Photometry is a technique in which the surface normals of an object are captured by photographing it under differing lighting conditions. This produces much better information than traditional single-image texture mapping; however, it requires a very controlled studio lighting setup. In general, the more light angles captured, the better the resulting normal map.
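The classic formulation of this idea is photometric stereo: under the assumption of a matte (Lambertian) surface, each pixel's brightness is the dot product of the light direction and the surface normal, so with three or more known light directions the normal can be solved per pixel by least squares. The sketch below illustrates the principle (our own minimal implementation, not a production pipeline):

```python
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray) -> np.ndarray:
    """Recover per-pixel surface normals from several grayscale images of
    the same object, each lit from a known direction.
    images: (k, h, w) pixel intensities; lights: (k, 3) unit light directions."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                       # one column per pixel
    # Lambert's law: I = lights @ (albedo * normal). Solve by least squares.
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # G is (3, h*w)
    albedo = np.linalg.norm(G, axis=0)              # vector length = albedo
    normals = G / np.maximum(albedo, 1e-8)          # normalise to unit normals
    return normals.T.reshape(h, w, 3)
```

This is why more light angles help: extra images over-determine the least-squares system, averaging out noise in the recovered normals.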
Rendering
Rendering is the process of converting the 3D geometry, texture information, and lighting of a scene into a 2D bitmap using a piece of computer software. The final two-dimensional image is often called a render.
There are many software packages, both proprietary and open source, that perform this task. Broadly, they can be split into two categories: the slower though more accurate pre-rendering (or offline rendering) used for CG films, and the much quicker though less accurate real-time rendering used in games.
Texture Maps
A texture map is an image file applied to a 3D model to convey surface information and to ensure that the mesh reacts correctly to light. Maps can control colour, the angle at which light reflects, and apparent surface depth, among other elements.
A mesh can be thought of as the shape of an object, while texture maps add detail about how it appears and interacts with its environment.
Texture Synthesis
Texture synthesis is a technique by which larger digital textures are formed by analysing an example and adhering to its structural and colour content. It can be applied in several ways, including filling problem areas of a texture or growing images to larger resolutions.
In the image below you can see three unique variants synthesised from a single set of texture maps.
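To give a feel for how example-based growth works, here is a deliberately simplified, patch-based toy (our own illustration; real synthesis, including ArtEngine's, is far more sophisticated): the output is grown by copying random patches from the exemplar, so every output pixel is guaranteed to come from the input's colour content.

```python
import numpy as np

def naive_synthesis(exemplar: np.ndarray, out_h: int, out_w: int,
                    patch: int = 8, seed: int = 0) -> np.ndarray:
    """Toy patch-based synthesis: grow a larger texture by copying random
    patches from the exemplar. Production systems additionally match
    patch borders so structure stays coherent across patch boundaries."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape[:2]
    out = np.zeros((out_h, out_w) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            ph = min(patch, out_h - y)           # clip patches at the edges
            pw = min(patch, out_w - x)
            sy = rng.integers(0, h - ph + 1)     # random source location
            sx = rng.integers(0, w - pw + 1)
            out[y:y + ph, x:x + pw] = exemplar[sy:sy + ph, sx:sx + pw]
    return out
```

The missing ingredient, and the hard part, is choosing each patch so that its border agrees with what has already been placed; that constraint is what preserves the exemplar's structural content.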
Up-Res
Up-res, super-resolution and upscaling all refer to the same action: taking an image and bringing it to a higher resolution.
Traditionally this was achieved through signal processing methods, and while they can achieve some satisfactory results, they don’t perform as well as modern AI, typically incurring a loss of information and some blurring.
Neural networks, by contrast, are trained to explicitly add in semantic information, as opposed to merely stretching the image.
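The signal-processing baseline can be sketched in a few lines (plain numpy bilinear interpolation over a single-channel image; the function name is our own):

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Classic signal-processing upscale: new pixels are interpolated
    between existing ones. No detail is invented, which is why edges
    soften -- the gap AI super-resolution tries to close."""
    h, w = img.shape
    ys = np.linspace(0.0, h - 1.0, h * factor)   # sample positions in the source
    xs = np.linspace(0.0, w - 1.0, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                      # vertical blend weights
    wx = (xs - x0)[None, :]                      # horizontal blend weights
    img = img.astype(float)
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bottom = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bottom
```

Every output value is a weighted average of nearby inputs, which is exactly the "loss of information and some blurring" described above; a neural network instead predicts plausible high-frequency detail that interpolation cannot recover.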
The animation below shows an image being up-ressed from low to high resolution.