Friday, November 26, 2010

More unexpected progress

While my better half was preparing the traditional Thanksgiving meal, I made myself useful by indulging in another programming spurt: creating the graphic user interface part needed for 'node based compositing'. I am really impressed with the flexibility of the Qt programming library, and with the wealth of examples out there. I started out with an open-source mindmapping program, lobotomized most of its code, and left only the parts that provided a basis for what I want. Then I expanded the code to include node based interfaces with pin based I/O, and added the accompanying graphics widgets. After a day or so of fiddling around, here is the result:

The following features are fully functional:

  • Wrapping node based interfaces in graphical widgets,
  • Wrapping pin based I/O components in graphical widgets,
  • Selecting, moving and deleting widgets,
  • Zooming and panning the widget canvas,
  • Connecting I/O pins based on their value types

The following features still need work:

  • At the moment you can only add nodes, and not remove them.
  • It is possible to make cyclic connections. This should be prohibited.
  • Auto-arrangement of a node and all the nodes it connects to through all its output pins.
  • Decide whether to keep the undo/redo feature or remove it. This node editor already supports it, but supporting it throughout the entire editor will be a lot of work. I think. Or better said: I need to think a bit more about that ;-)

Although I had to spend a lot of time refactoring the existing code after I had lobotomized the mindmapping example, I am still impressed with how quickly you can get something like this up and running. In the past I had written a graphic interface like this in Java (to practice my programming skills I spent about 80 hours implementing a digital logic drawing board and processor); I guess that helped a lot with understanding what the existing code did and did not do. But again... unexpected progress!

... to be continued.

Monday, November 22, 2010

Unexpected progress

This post is quite unexpected, as is the development progress that is reported in it. In the last two weekends I had "a few" hours to spend, which of course I spent chatting up a C++ compiler. I am utterly amazed at how fast things have progressed since the previous post: at the time I only had the intention to start rewriting the GUI of my 3d tool (using Qt this time), but now I can report some real, actual ~progress~. Below is one of those pictures that say a thousand words.

Basically, I have the following features working:

  • Configuration of options, such as which render system to use (DirectX or OpenGL). This is not shown in the screenshot.
  • Logging subsystem, shown in the screenshot.
  • Resource inspector: this static list shows which resources (textures, shaders and so on) are currently in use.
  • Parameter editor: each parameter in this list can be edited using an appropriate editing widget. So if you click on a texture parameter, you will see a list of textures; if you click on a Vector3, you can edit its 3 values. And so on, but you get the idea.
  • Minor features that have been implemented: load/save dialog, playback controls, texture browser.

I expected that it would take me weeks (!) to get to this point; in reality it took me 'a few' hours (meaning, I didn't get much sleep in the last two weekends). I got really lucky that I managed to find some really useful programming examples of how to "get things done" using the Qt framework :-)

Next on the todo list is to start working on the timeline-based sequencer/editor (which allows you to place modules, that draw 3d scenes, on different layers of the timeline), followed by the 'node based compositing' part.

(Note: an example of a timeline based sequencer can be found in my November 5th post, in the screenshots for Version 3. Here are two reference screenshots. One shows node based compositing in Blender 2.5; I am aiming for something similar. The other is from ShaderFX and shows how node based compositing can be used to replace low-level shader programming. This way you don't need to be a programmer to create 3d animations and/or shaders)

There is a small chance that I will reverse this order, as I have some example code that may help tremendously with creating the node based compositing GUI, and I may be able to use this tool for next semester's 'principles of biological modeling' class. However, I will definitely not be able to work on this in the next 3 weeks, as I am ~swamped~ with end-of-semester work for my graduate study at Brandeis University.

... to be continued!

Friday, November 5, 2010

A brief history of my 3d editor programming efforts...

I wanted to devote one post to providing an overview of the evolution of the 3d editor that I am working on. When you are engaged in your passion, it is easy to lose track of how time passes by; while writing this post it struck me that a number of years have passed since I started working on this editor. Anyway, a rough functional distinction between the various versions can be made:

  • Version 1: Using C++ for direct control of the DirectX API, and Microsoft Foundation Classes to create a GUI
  • Version 2: Using C# to build an engine on top of managed Ogre as a rendering component, and WinForms to create a GUI
  • Version 3: Using C++ to build an engine on top of Ogre as a rendering component, and glue that to a WinForms based GUI programmed in C#
  • Version 4 (in development): Extend the C++ engine to support node based compositing (besides timeline based compositing), and use Qt to create a GUI. Also, this version will be platform independent.

The remainder of this post will provide some basic information about each of the versions, accompanied with some screenshots.

Version 1: initial experiments and the beginnings of a game editor

It all started with some simple experiments done in C++ (using Microsoft Visual Studio 6) and the DirectX API (version 7 or 8 at the time, I think). I made a few simple programs that loaded and displayed a 3d scene, experimented with simple generation of landscape meshes, and so on. When that went well, I moved on to programming more elaborate routines, all of which were basically aimed at speed-optimizing the rendering of a large number of triangles:

  • Octree based spatial subdivision, to determine which parts of a scene are on-screen and which ones are off-screen
  • Occlusion culling, to avoid rendering triangles that are farther away from the camera and obscured by triangles closer to the camera
  • Collecting all rendering operations into batches, and rendering those.
  • Rendering meshes with adaptive level of detail: the farther away from the camera, the fewer triangles are used to display the meshes.

The screenshot below shows the octree based spatial subdivision at work:

The screenshot below shows the occlusion culling routine at work. When the view of the city is obscured by a wall (top image), the geometry behind the wall is not rendered. As the camera rises above the wall (center images), more octree cells containing geometry become visible. The bottom screenshot is included just as a reference for what the city model looked like.

I also built a graphic user interface around these features using Microsoft Foundation Classes (MFC). Below are a few screenshots of this version of the editor...

At this point in time I looked back at what I had accomplished and realized a few things:

  • The direction that my editor was going in was more toward a game (level) editor than toward a 3d visualization editor. I did not like this direction. At all.
  • Even though I thought that programming low level routines for rendering large amounts of triangles was a nice challenge, I did not want to spend my time "reinventing the wheel", as there are plenty of other solutions available. I started considering dropping in an existing open-source 3d engine to handle all the rendering aspects, so that I could focus on creating the editor that allows the workflow that I have in mind.
  • Developing graphic user interface components in MFC took up a lot of time, more than was necessary. I started to consider switching to environments that are friendlier to developing your own user interface components.

Ultimately this resulted in my first complete rewrite of the editor...

Version 2: The limitations of C# and .NET

After some deliberation I had made a few decisions:

  • Use existing open source graphic rendering engines. I settled on Ogre3d because of its completeness in features, documentation and examples.
  • Use C# and .NET for the graphic user interface, because that is what I had experience in.

Here are a few screenshots showing the graphic user interface. Notice that most of the panels (as discussed in the previous blog post) are implemented: editors for timeline, scripts, parameters, splines and so on.

The decision to use C# had its good and its bad sides. The good side is that the ideas I developed for how to structure the engine and the GUI are more or less completely intact in the current version of the editor, and that it did not take much time to implement all of them. The bad side is that I built everything in C#, which included using a wrapper for Ogre. Let's just say that at the time I did not think straight and was not able to extrapolate the limiting consequences of that decision, the most important of which were:

  • Because of using a wrapper for Ogre, it always took a while before updates to Ogre were supported in updates to the wrapper code.
  • The wrapper only wraps Ogre, not all of the additional plugins that you may want to use (e.g. OIS for accessing keyboard, joystick and mouse from your code)
  • If I wanted to make changes to Ogre, I had to make changes to the wrapper code as well.

So, after a few weeks of working on this version, I labeled it "a can of worms" and started considering yet another rewrite...

Version 3: trying to mix things that really should not be mixed

For some reason I came up with the idea that I could get the "best of both worlds" by using C# for creating the GUI, while using C++ for creating the engine. I would have to glue the two together using a tool called "Simplified Wrapper and Interface Generator" (SWIG). This part of the process did not take me that long. Finally I had direct access to all the goodies (plugins, examples, etc.) made available by the Ogre community, while creating custom GUI components was quite easy in .NET. At least, that was what I thought... but more on that after the mandatory screenshots below.
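For reference, the SWIG glue boils down to a small interface file that tells SWIG which declarations to generate C# proxy classes for. A minimal sketch (the header name "Engine.h" is just an illustration, not my actual file layout):

```
%module engine

%{
// Headers the generated wrapper code needs at compile time.
#include "Engine.h"
%}

// Declarations SWIG should generate C# proxy classes for.
%include "Engine.h"
```

From a file like this, SWIG emits both the C++ wrapper source and the matching C# classes, which is convenient, but it also means every engine call from the GUI goes through that generated layer.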

While working on Version 2 and Version 3 of this editor, I had enrolled in a graduate study (M.Sc. specializing in Human Computer Interaction) at the Delft University of Technology in the Netherlands. I was lucky enough to be able to use this editor as a basis for my thesis concerning 'patient motivation in virtual reality based neurocognitive rehabilitation'. This proved to be an excellent case study to test my editor and engine. In order to create a game based rehabilitation exercise, I had to finalize or add a few bits and pieces to the engine:

  • Saving and loading of engine states,
  • Sound (using the FMOD library),
  • Saving and loading of XML files, for data or configuration input and output,
  • Get keyboard and mouse input to work,
  • Get Wii Remote input to work, not only as a game controller but also for headtracking,
  • General robustness of the editor and engine, so that it runs reliably and stably on different computers.

The results from this work can be found here [] (this includes some screenshots of the final game, and a video that shows how patients would interact with the system using pointing and headtracking mechanisms).

However, I realized that there were some major downsides to having the editor/engine code separated across C# and C++. The most important one is that it is quite limiting not to have direct access to engine functionality from the editor: everything has to be handled by proxy wrapper classes (generated by SWIG). There were several options to work around this, including generating wrapper code for Ogre myself, re-integrating MOgre, or accepting the current limitations. A discussion with the author of Yaose, a script editor for Ogre, pointed me in the direction of the Qt framework for creating GUIs. In my opinion this library is more flexible than C# and WinForms for creating GUIs; it allows direct access to the rendering engine without the need for wrapper code, and, as a bonus, it runs on multiple platforms. Since Ogre also runs on multiple platforms, this means that in theory my editor could be configured to run on many different platforms (Windows, OSX, Linux, iPhone, ...).

Version 4: Doing It Right(er)

So recently I started yet another rewrite. As with Version 3, it is a partial rewrite and a partial extension of the codebase. This time I need to rewrite the GUI, but there is plenty of Qt example code available to get me started (a few days of work on the Qt version got me to the same point as a few ~weeks~ of work on C#/WinForms). Furthermore, the engine codebase needs to be extended to allow node based compositing next to timeline based compositing. I am also contemplating setting up the codebase so that the engine can be separated from the underlying graphics rendering framework, so that you could directly access OpenGL or DirectX instead of going through Ogre as middleware. This would facilitate using the editor for demoscene productions. I expect that version 4 of the editor will be operational and ready to be used in the summer of 2011.

However, I don't have any useful screenshots to show yet... And it looks like, due to my university schedule, I will not be able to spend much time on programming (if any) in the next few weeks. So the next major update will have to wait until the end of the year...

... but at least, progress is still being made!