Welcome, I’m your Oracle. What would you like to know?
Those promising first lines announce what is to come: a virtual avatar who walks you through a journey of knowledge and discovery.
This was our first take, a prototype if you will, at creating ways to communicate global data on the ocean's possible futures and the scientific models underneath the predictions, as part of the NF-UBC Nereus Program.
“We use a 3D gaming engine as an interface for the scientific models in order to present a science-based view of how the oceans once looked, how they look now, and how they may look in the future depending on our actions.” V. Christensen
This interactive presentation builds on a few years of research and some trial and error in finding better ways to communicate our science. We realized that a very powerful way of communicating quantitative differences between scenarios is a split screen with a continuous camera. The principle is to keep a constant baseline against which we can compare the simulated future scenario.
For this installation we have a single yes/no question (“do you want to limit fishing for top predators?”) and thus two possible outcomes. We see this as an open-ended framework we can feed with new questions and visualizations according to the topics of the time. At some point we could also use the globe to explore the geographic aspects of the issues.
Or we may change everything and start from scratch. It’s really hard to tell at this point, but so far the response has been quite positive.
A screen capture of the whole interaction can be seen on YouTube:
What is going on under the hood
Blender – an open-source 3D tool – was used to produce all the graphic elements of this application. The application is built on top of the Blender Game Engine – a 3D game engine that is part of Blender. All the models and animations were made in Blender, and the same goes for the videos.
Oscar Baechler, an artist from Seattle, joined us for two weeks to work on the Oracle. His role covered character concept, modelling, texturing, rigging and animation. It was quite a relief to have him on the team so I could focus on the programming, the rest of the artwork, the videos …
The animation workflow was built on top of Papagayo, a standalone application that helps convert recorded dialogue into animated shapes over time. It works quite well for lip-sync. The original add-on for Blender was designed to control shape keys; that was a bit overkill and doesn't work in the game engine, so for this project I changed it to apply the transformations straight to hardcoded bone channels. Part of the patch was already committed back to Blender. Soon Oscar and I will create documentation for this workflow and share a sample character ready to use.
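As a rough sketch of the kind of data this workflow consumes, here is a minimal parser for Papagayo's exported lip-sync file, assuming the Moho-switch `.dat` layout (a header line followed by frame/phoneme pairs); the actual mapping onto bone channels is omitted:

```python
def parse_papagayo(text):
    """Return a list of (frame, phoneme) keys from Papagayo .dat text.

    Assumes the Moho-switch export format: one header line, then one
    "frame phoneme" pair per line.
    """
    keys = []
    for line in text.strip().splitlines()[1:]:  # skip the header line
        frame, phoneme = line.split()
        keys.append((int(frame), phoneme))
    return keys


# A small hand-written sample in that format:
SAMPLE = "MohoSwitch1\n1 rest\n12 AI\n20 MBP\n"
```

Each `(frame, phoneme)` key would then drive the pose of the corresponding mouth bones at that frame.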
For cut scenes in the game engine I controlled the playback of the animations (for the camera and the Oracle) through an Action Actuator in Property mode. The property used (frame) was updated with the current position of the dialogue being played. The audio can be played with the Python module audaspace or with a Sound Actuator. The key is to set the property to the current time of the audio multiplied by the animation frame rate: object["frame"] = time * 30.0
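The sync step can be sketched in plain Python. In the game engine the object would be a KX_GameObject and the time would come from audaspace; here a plain dict and a float stand in so the logic is clear:

```python
ANIM_FPS = 30.0  # animation frame rate used by the Action Actuator


def sync_frame(obj, audio_time, fps=ANIM_FPS):
    """Map the audio playback position (seconds) to an animation frame.

    'obj' stands in for the game object carrying the "frame" property
    that the Action Actuator in Property mode reads every logic tick.
    """
    obj["frame"] = audio_time * fps
    return obj["frame"]


oracle = {"frame": 0.0}
sync_frame(oracle, 2.5)  # 2.5 s into the dialogue -> frame 75.0
```

Because the frame is derived from the audio clock on every tick, the animation can never drift out of sync with the dialogue, even if the engine drops frames.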
Look at me
To have the Oracle looking at the camera, she has a neck bone with a Damped Track constraint. For every animation we set a different influence factor according to how much we want her to look straight at us. Neither the Damped Track constraint nor controlling a constraint's influence was supported in the game engine at the time. Despite all the rush, we could afford to allocate some time for development: after a few copy-and-pastes I expanded the Armature Actuator to support Set Influence and the Damped Track constraint. The patch with a test build is waiting for peer review here.
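The per-animation influence could be organised as a simple lookup; the action names and influence values here are made up for illustration:

```python
# Hypothetical per-action influence for the neck's Damped Track
# constraint: 1.0 = stare straight at the viewer, 0.0 = follow the
# keyed animation only. Names and numbers are illustrative.
LOOK_AT_INFLUENCE = {
    "idle": 1.0,
    "gesture": 0.4,
    "turn_away": 0.0,
}


def look_at_influence(action, default=0.5):
    """Influence factor to feed the Armature Actuator for this action."""
    return LOOK_AT_INFLUENCE.get(action, default)
```

In the engine, this value would be pushed to the constraint via the expanded Armature Actuator's Set Influence mode whenever an action starts.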
Click on the image to see the complete Logic Brick setup of the Oracle.
The videos have a clever setup. The three scenarios (present / future-yes / future-no) are in fact a single file. I'm using Python to switch the current scenario through a simple button in the interface (you've got to love Blender's Python capabilities). By doing so, the multiple particle systems (e.g. reef and small fish) have their populations changed according to the scientific data feeding the visuals. The camera border is also set (so we render only what we need), and the colour of the water and other post-processing settings (i.e. composite nodes) change as well. For the big fish and the turtle we simply hide the ones we don't need for the current scenario.
This almost works out of the box. We had, however, to patch Blender to force the particle system to render exactly what you see on the screen (usually only a fraction of the particles is drawn, to avoid overloading the graphics card during the pre-render work). We could simply have used different particle systems with different population numbers. The problem is that once you change a single particle, the whole arrangement turns out different. What we needed was simply to show more or fewer particles of one specific arrangement of a particle system.
We have been using this patch for quite some time now, but I have yet to work with Janne Karhu (Blender's particle developer) to push it into trunk.
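The idea behind the patch can be illustrated outside Blender. In this toy model the arrangement depends on the total count (as with Blender's particle distribution), so regenerating with a different count scrambles every position, while hiding a subset of one fixed arrangement keeps the remaining particles exactly where they were:

```python
import random


def particle_arrangement(count, seed=7):
    """Toy stand-in for a particle distribution.

    As in Blender, the layout depends on the total count: folding the
    count into the seed means a different count scrambles everything.
    """
    rng = random.Random(seed * 1000 + count)
    return [(rng.random(), rng.random(), rng.random()) for _ in range(count)]


def visible_subset(arrangement, population):
    """The patched approach: one fixed arrangement, hide the rest."""
    return arrangement[:population]


full = particle_arrangement(100)
# Showing 30 of the fixed arrangement keeps those 30 in place...
few = visible_subset(full, 30)
# ...whereas regenerating with count=30 gives a different layout.
regenerated = particle_arrangement(30)
```

This is why hiding a subset was preferable to separate particle systems with different populations: the visible particles stay put across scenarios.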
Future Yes: more big fish, less small fish, turbidity 1.0
Future No: less big fish, more small fish, turbidity 0.2
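Putting it together, the scenario switch could be sketched as one table of settings applied on a button press. The turbidity values are the ones listed above; the population numbers, names, and the present-day row are invented for illustration:

```python
# Hypothetical per-scenario settings. Turbidity for the two futures
# comes from the post; everything else is made up for the sketch.
SCENARIOS = {
    "present":    {"big_fish": 12, "small_fish": 500, "turbidity": 0.5},
    "future_yes": {"big_fish": 30, "small_fish": 200, "turbidity": 1.0},
    "future_no":  {"big_fish": 4,  "small_fish": 800, "turbidity": 0.2},
}


def apply_scenario(world, name):
    """Copy the chosen scenario's settings into the world state.

    In the installation this would instead update particle populations,
    the water colour, compositing nodes, and object visibility via
    Blender's Python API.
    """
    world.update(SCENARIOS[name])
    return world
```

A single data table like this keeps the yes/no question easy to swap out, matching the open-ended framework described above.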
If you want to see the videos individually, please visit:
I was going to post a CBC Canada video covering the event and airing a good part of the Oracle presentation; however, the video is no longer available. I wrote to the news network to see if I can get hold of it to re-share here. In the meantime I gathered some links.
Here’s a question I’ve been wrestling with this week: is the recent surge in crowdfunded projects a good thing for the Blender community or not? I love the concept of crowdfunding, and it works amazingly well for the Blender Institute’s … Continue reading →
A technical test using 4K footage of the real Amsterdam: tracked, integrated with some CG elements, all rendered in Cycles.
Of course, being a test, we focused on some aspects and moved quickly past others, so the modelling is really simple kit-bashing. Still, kits are incredibly useful, and a lot can be done with a small number of beam/girder/truss modules.
These use materials similar to the greeble objects in an earlier post, but even simpler: no AO bake, just an image with various B/W gradients. Each piece is UV-mapped onto one of those to set the rusty/shiny parts, and tileable textures are applied using a cube-map projection.
The matching part (materials and light) is always interesting. It is unforgiving, and spotting differences in extended objects (stone, brick) is very easy; but on the upside, the ‘real’ objects in the footage make an amazing reference that really helps improve the realism. Simply put: you don’t need to think that much or look for references, you pick something that doesn’t match and improve it :)
The shot was tracked and reconstructed by Sebastian, who also set up layers and masks for rendering in BI. I then did the modelling, and to render in Cycles we had to rethink the setup without masking layers, so we tried a few approaches and eventually found a working one with the help of Kjartan.
The main question was how to manage the reconstructed parts (..the real objects that need to cast and receive shadows and AO from the CG objects). The final workflow for the project will likely be very different, but for a start it wasn’t too scary :)
It will be a lot easier to do this kind of compositing when we get a shadow and AO pass, as well as mask layers.
Once we have managed to render this, we’ll post the finished tracked sequence. It will be interesting to see how it looks. Will there be jitter? Will there be sliding? Fireflies?
In a few days we’ll know more!
In this tutorial on compositing and camera tracking in Blender, Greg Zaal of Blender Nerd takes you through the complete process of placing digital 3D assets into existing footage. The series makes use of Blender’s modelling tools, texturing and rendering in Cycles, and even the new Camera Tracker to match the camera movement for believable VFX integration.
What you’ll learn
In the final part of this series you will learn how to cover up the markers that were placed in the footage as tracking points (so that it could be tracked properly), resulting in a clean 3D composite.
julioras writes: Hey guys, this is my latest project. I’m very happy with the Cycles result, and I hope you like it too… Blender 2.62 + Cycles. I used an HDRI environment texture as lighting; she’s a bit dark because it is … Continue reading →
Christian Ochsner shares some insight into three file formats: OBJ, Collada and Alembic. Christian writes: I thought it might be helpful for some people to learn something about the new Alembic file format and how it differs from OBJ and … Continue reading →
Paweł Kowal has created a patch for a ‘UV Offset Modifier’. He’s looking for support to get his code into the Blender ‘trunk’ (and has created an absolutely wonderful presentation!). Paweł writes: I’m working on a UV Offset modifier for Blender. … Continue reading →