tag:blogger.com,1999:blog-63083048605473497022024-03-13T22:38:45.890-07:00Preserved for Digital EternityA development blog for the virtual reality software that I am working on.Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.comBlogger14125tag:blogger.com,1999:blog-6308304860547349702.post-50861461903681481432011-11-06T16:52:00.000-08:002011-11-06T17:06:51.600-08:00More progress :-)<div dir="ltr" style="text-align: left;" trbidi="on">
<div>
<span class="Apple-style-span" style="line-height: 22px;">Here is a very brief update on what I've been doing lately, in no particular order:</span></div>
<div>
<span class="Apple-style-span" style="line-height: 22px;"><br /></span></div>
<br />
<ul style="list-style-image: initial; list-style-position: initial; list-style-type: disc; margin-bottom: 0.5em; margin-left: 0px; margin-right: 0px; margin-top: 0.5em; padding-bottom: 0px; padding-left: 2.5em; padding-right: 2.5em; padding-top: 0px; text-align: left;">
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: initial; border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Started working on a deferred particle renderer node, but had to put the work on hold after some unexpected, unexplained GLSL shader behavior: a vertex texture fetch only seems to be able to access one or two texels in one corner of the texture. I am not sure whether this is caused by a bug in my code, in Ogre 3D, in OpenGL/GLSL, or in Parallels, which I use to host a virtual WinXP development environment on my MacBook Pro. Posting a question on the Ogre 3D forum did not yield any response, and I currently do not have a working Ogre 3D example that uses GLSL and vertex texture fetching. I think my next step will be to port all my GLSL shaders to HLSL or Cg, and see if that makes a difference.</li>
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: initial; border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Extended the tool and framework with support for stereoscopic rendering (currently only using Direct3D since the OpenGL drivers do not seem to support multiple monitors on Win XP): </li>
<ul>
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: initial; border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Dual output, for use with head-mounted displays, or two projectors with polarized filters,</li>
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: initial; border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Anaglyph, for use with a single screen and red/cyan glasses,</li>
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: initial; border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Autostereoscopic, for use with 3D TVs that support horizontal, vertical, or checkerboard interlaced signals.</li>
</ul>
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: rgb(153, 136, 119); border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Worked on a proof-of-concept demo for my academic advisors. Basically this entailed making sure that a simple landscape 'flyby' scene works, as does my Delft University of Technology thesis project. </li>
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: rgb(153, 136, 119); border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Added exception handling support to the codebase, which prevents the tool from crashing without any sort of warning and/or log message. For instance, if you load a previously created visualization from file, and an error occurs during loading, an error message will be shown and the affected modules and nodes will not be loaded.</li>
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: rgb(153, 136, 119); border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Switched to Ogre 1.8. Using the CMake build system now allows me to generate builds for Windows or Mac, using Ogre 1.7 or 1.8, and using a variety of compilers and IDEs such as Microsoft Visual C++, Code::Blocks, Eclipse, GCC, Xcode, and so on. I still want to switch to Xcode and a Mac-based development environment, but my familiarity with MSVC's debugger is what keeps me coming back to Windows. For now.</li>
<li style="border-bottom-style: none; border-color: initial; border-left-style: none; border-right-style: none; border-top-color: rgb(153, 136, 119); border-top-style: none; border-width: initial; line-height: 1.4; margin-bottom: 0.25em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-indent: 0px;">Last but not least, I did some testing, testing and then some more testing. The shader editor seems to work OK now, including on-the-fly compiling and reloading of shaders. Yay :-D</li>
</ul>
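For readers unfamiliar with these stereoscopic output modes, the per-pixel compositing rules for two of them can be sketched on the CPU in a few lines. This is purely illustrative (the tool composites on the GPU, and these names are made up for the example), but it shows what 'anaglyph' and 'checkerboard interlaced' actually mean:

```cpp
#include <cassert>
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Red/cyan anaglyph: take the red channel from the left-eye image and
// the green and blue channels from the right-eye image.
inline Rgb anaglyph(Rgb left, Rgb right) {
    return Rgb{ left.r, right.g, right.b };
}

// Checkerboard interlacing for autostereoscopic TVs: pixels where
// (x + y) is even come from the left eye, odd pixels from the right eye.
inline Rgb checkerboard(int x, int y, Rgb left, Rgb right) {
    return ((x + y) % 2 == 0) ? left : right;
}
```

The dual-output mode needs no such mixing at all: the left and right images simply go to two separate outputs.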
<div>
<span class="Apple-style-span" style="line-height: 22px;"><br /></span><br />
<span class="Apple-style-span" style="line-height: 22px;">Since I will be using this tool in my PhD research experiment(s), my todo list is partially dictated by what is needed to support that. But that should not amount to having to develop more than a node or two. In the meantime, I will be busy testing the tool and framework, and working on the deferred particle renderer. I will get those @#$@# particles to work; I am not ready to give up just yet ;-)</span><br />
<span class="Apple-style-span" style="line-height: 22px;"><br /></span></div>
<div>
<span class="Apple-style-span" style="line-height: 22px;">... to be continued ...</span></div>
</div>Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-38969264358089114722011-10-02T19:26:00.000-07:002011-10-02T19:49:31.938-07:00Reusable nodes: a primer<div dir="ltr" style="text-align: left;" trbidi="on">
Less than a year ago I started a major rewrite of the program. My main motivation was to switch from .NET to C++ as the main programming language, and from WinForms to Qt for creating the graphic user interface. Another major goal was to be able to create visual content using both node and timeline based compositing techniques. This weekend I ended up implementing one of the last major required features of this revamped framework: reusable nodes. The idea is that once you have created visual content in one place in the program, you can reuse that content in other places.
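The sharing mechanism can be sketched in a few lines of C++. The type names below are hypothetical stand-ins for the framework's node types, not its actual API; the point is simply that each consumer holds a shared reference to one node instead of its own copy:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-in for a scene node holding some content.
struct SceneNode {
    std::vector<std::string> meshes;  // e.g. "shuttle", "hubble", "astronaut"
};

// A consumer (a render node or a postprocessing node) references the
// shared scene; it does not own a private copy of it.
struct PostprocessNode {
    std::string effect;                // "glass", "nightvision", ...
    std::shared_ptr<SceneNode> input;  // shared with other consumers
    std::size_t visibleMeshCount() const { return input->meshes.size(); }
};
```

Because both consumers point at the same node, a change made through one of them (say, adding a mesh to the scene) is immediately visible through the other.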
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-kj0YsnPlsdA/TokaObIjn0I/AAAAAAAAH5U/H9aNH1ZP3ts/s1600/proxy%2Bnode%2Btest%2Boutput.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="271" src="http://4.bp.blogspot.com/-kj0YsnPlsdA/TokaObIjn0I/AAAAAAAAH5U/H9aNH1ZP3ts/s400/proxy%2Bnode%2Btest%2Boutput.png" width="400" /></a></div>
<br />
The screenshot above shows a simple example: a single 3d scene drawn in three different ways: without any effects (left), with a 'glass' postprocessing effect (top right), and with a 'nightvision' postprocessing effect (bottom right). The bottom left shows how various nodes are interconnected to construct the 3d scene. The four leftmost nodes comprise the scene elements (space shuttle, Hubble telescope, astronaut, and a light source), which are all connected to a scene node shown near the center of the image. This scene node connects to a render node, which draws its contents to the screen.
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-hPJp4hSuBC4/TokT2LfiXUI/AAAAAAAAH5M/f-6BPuNFuuo/s1600/module_postprocess.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-hPJp4hSuBC4/TokT2LfiXUI/AAAAAAAAH5M/f-6BPuNFuuo/s400/module_postprocess.jpg" width="295" /></a></div>
<br />
The screenshot above shows how each of the postprocessing nodes references the same scene node. The original scene node's content is used as input for both of the effects ('glass' and 'nightvision') that are drawn on the screen. Any changes made to, for example, the space shuttle mesh node will immediately be reflected in all renderings of the scene, since they all share the same scene node.
The framework allows any type of node (e.g. a RenderTarget node) to be shared in this way, even though sharing may not make much sense for every type of node (e.g. a Light node or a Mesh node).<br />
<br />
<br />
Even though the core functionality related to sharing nodes works, including saving/loading, there is still much work to do, mainly with regard to testing: creating scenes, adding meshes, animating meshes, and then reusing these scenes multiple times during postprocessing steps. I am sure this will involve a lot of program crashes and the necessary bug fixes. However, I am fairly confident that the meat of the code is good now; I do not expect to be making many structural changes while preparing the framework for actual use in either my research or a demoscene production :-)<br />
<br />
<br />
Other issues that I've been working on are:
<br />
<ul>
<li> GUI: added syntax highlighting for compositor and overlay scripts
</li>
<li> Engine: moved away from Cg in favor of GLSL. I just ran into too many related issues: not just bugs in the Cg compiler, but also limitations of running a Windows development environment in a virtual machine on a Mac.
</li>
<li> Switched from VMware to Parallels for running Windows in a virtual environment. VMware imposed severe limitations on the graphics hardware features made available in the virtualized environment; basically, I was not able to use any deferred rendering until I switched to Parallels.
</li>
</ul>
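As an aside, the core of the syntax-highlighting feature mentioned above is just mapping keyword occurrences to colored spans. The real implementation lives in the Qt GUI (typically a QSyntaxHighlighter subclass); this Qt-free toy sketch only shows the span-finding part, with whitespace-separated tokens as a simplifying assumption:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <sstream>
#include <string>
#include <vector>

// A (start, length) range that a highlighter would color.
struct Span { std::size_t start, length; };

// Toy keyword scanner: splits a line on whitespace and records the
// position of every token that is a known keyword.
std::vector<Span> keywordSpans(const std::string& line,
                               const std::set<std::string>& keywords) {
    std::vector<Span> spans;
    std::istringstream in(line);
    std::string word;
    std::size_t pos = 0;
    while (in >> word) {
        std::size_t at = line.find(word, pos);  // locate this token in the line
        if (keywords.count(word))
            spans.push_back(Span{at, word.size()});
        pos = at + word.size();
    }
    return spans;
}
```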
<br />
As usual I'll end this post with a small to-do list, in no particular order:
<br />
<ul>
<li> Testing, testing and more testing
</li>
<li> Add support for a <a href="http://rocket.sourceforge.net/">GNU Rocket</a> node. However, since my GUI already supports editing of keyframe animation (position/orientation/scaling of a mesh or camera), I may generalize that functionality so that you can keyframe any floating point number. I think that with relatively few lines of code I can create the same functionality as GNU Rocket. What I end up doing depends on which of the two options is the most effective (resulting functionality vs. effort required to implement it).
</li>
<li> I still dream of deferred particle renderers... now that my development environment actually supports them, I can actually start working on it :-)
</li>
<li> More testing.... Will it ever end?
</li>
</ul>
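The 'keyframe any floating point number' idea from the list above boils down to a sorted set of (time, value) keys with interpolation in between, which is essentially what GNU Rocket tracks provide. A minimal sketch, with hypothetical names and only linear interpolation:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// One key on a track: a value pinned to a point in time.
struct Key { float time, value; };

// A track of keys, queryable at any time t. Values are clamped to the
// first/last key outside the keyed range.
class FloatTrack {
public:
    void addKey(float time, float value) {
        keys_.push_back(Key{time, value});
        std::sort(keys_.begin(), keys_.end(),
                  [](const Key& a, const Key& b) { return a.time < b.time; });
    }
    float valueAt(float t) const {
        if (keys_.empty()) return 0.0f;
        if (t <= keys_.front().time) return keys_.front().value;
        if (t >= keys_.back().time) return keys_.back().value;
        for (std::size_t i = 1; i < keys_.size(); ++i) {
            if (t <= keys_[i].time) {  // t lies between key i-1 and key i
                const Key& a = keys_[i - 1];
                const Key& b = keys_[i];
                float f = (t - a.time) / (b.time - a.time);
                return a.value + f * (b.value - a.value);
            }
        }
        return keys_.back().value;
    }
private:
    std::vector<Key> keys_;
};
```

GNU Rocket additionally supports other interpolation modes (step, smooth, ramp) and drives the tracks from an external editor over a socket; the sketch above only covers the data model.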
... to be continued ...
</div>
Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-85891301470165449062011-08-17T19:31:00.000-07:002011-08-17T20:03:34.737-07:00Summertime!During the summer I did not spend as much time on this project as I intended to, mainly because family and friends were visiting us. Another reason was that we're still not entirely settled into our new house; we often had to go out to the stores to hunt down furniture and tools, and we spent a lot of time on home remodeling, in the garden, and so on. However, the little time that I did manage to spend developing the tool was spent effectively. Here is a list of what I got done:
<br />
<br /><ul>
<br /><li><b>Platform independence</b>: I am using CMake to manage build systems, which means that it is now possible to compile the tool for either Windows (32/64 bit) or Apple OS X (32/64 bit, Intel, PPC) using a variety of compilers (Visual Studio, Xcode). It took me a while to learn the (quirks of the) CMake scripting language, but it was time well spent.
<br /><li><b>I decided to drop the use of Cg</b>: Previously I was using NVidia's Cg language to write shaders in, most importantly because it is able to generate both GLSL and HLSL shaders from the same Cg code. What I did not take into account were the bugs that Cg itself contains. For instance, some GLSL shaders which I had ported to Cg work when compiled to HLSL but not to GLSL; with other shaders it is the other way around. After looking around on the internet for possible causes, I learned of the bugs in Cg. Even though I will keep support for Cg (and DirectX) in the tool, from now on I will write all shaders in GLSL (and use the OpenGL render system).
<br /><li><b>Investigated other graphics rendering middleware</b>: I briefly investigated using another graphics rendering library (Cinder) as a replacement for Ogre. I think I could reuse most of my tool's code (both the graphic user interface and the underlying engine code), and Cinder certainly is usable; however, I decided not to pursue this any further. Although Cinder is a young and promising library, it is aimed more at 'creative coding'. That is in part the use I have in mind for my tool, but it is quite possible that I will need my tool to do more than 'just' creative coding. Also, even though Cinder works just fine 'out of the box', as soon as you make a few changes (e.g. compile to a shared dll instead of a static library) the proverbial shit hits the fan. Although I lost a little over a week experimenting with Cinder, I decided to stick with Ogre (for now).
<br /><li><b>Several small functional updates / code refactors</b>: I made some small changes to the types of nodes that are available. Instead of having several camera nodes connect to a single scene node, each scene node now exposes its own (single) camera object. You can animate (keyframe) the camera object, and by connecting an Animation-multiplexer node, you can switch between any one of up to 10 connected animations for the camera object. This allows you to switch between several camera viewpoints within the same scene. As a result of this and other changes, the way that functionality is divided over the nodes now looks and feels much 'cleaner'. I think this will work out just fine!
<br /></ul>
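The Animation-multiplexer node described above is conceptually simple: up to 10 connected inputs, one of which is routed to the output. A rough sketch of the idea, with hypothetical names rather than the tool's actual node API, modeling each animation as a function of time:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <functional>

// Sketch of an animation multiplexer: up to 10 input animations,
// exactly one of which drives the output at any moment.
class AnimationMultiplexer {
public:
    static constexpr std::size_t kMaxInputs = 10;
    using Animation = std::function<float(float)>;  // time -> value

    bool connect(std::size_t slot, Animation anim) {
        if (slot >= kMaxInputs) return false;  // only 10 slots available
        inputs_[slot] = anim;
        return true;
    }
    void select(std::size_t slot) {            // choose the active input
        if (slot < kMaxInputs) active_ = slot;
    }
    float evaluate(float time) const {         // sample the active input
        return inputs_[active_] ? inputs_[active_](time) : 0.0f;
    }
private:
    std::array<Animation, kMaxInputs> inputs_{};
    std::size_t active_ = 0;
};
```

Switching camera viewpoints then amounts to calling `select` with a different slot; every downstream node keeps reading the same output.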
<br />
<br />Here is a list of things to do, in no particular order:
<br /><ul>
<br /><li><b>Test & fix bugs</b>: I fixed a bug that had been present for a long time. But by accident I also discovered a new one, in the undo/redo system. Even though I have an idea of how to fix it, I need to test more extensively before I add new features to the tool.
<br /><li><b>Deferred particle renderer</b>: I have not done much work on this yet, even though I intended to have it finished by the end of the summer. I did spend some time looking at the math involved, but I lost track of the notes I took. Oh well.
<br /></ul>
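For what it's worth, one small piece of the particle math is easy to write down from memory: the 'soft particle' fade that attenuates a particle as it approaches scene geometry, avoiding hard intersection edges. This is just one ingredient commonly used alongside deferred particle rendering, not the technique itself, and the function name is made up:

```cpp
#include <algorithm>
#include <cassert>

// Fade factor for a soft particle: 0 where the particle touches or sits
// behind the scene geometry, ramping up to 1 once the particle is at
// least fadeRange in front of it. Depths are linear view-space depths.
float softParticleFade(float sceneDepth, float particleDepth, float fadeRange) {
    float diff = (sceneDepth - particleDepth) / fadeRange;
    return std::min(1.0f, std::max(0.0f, diff));  // clamp to [0, 1]
}
```

In a real renderer this runs in the fragment shader, with sceneDepth fetched from a depth texture, and the result multiplies the particle's alpha.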
<br />
<br />Unfortunately, the academic year is also about to start. Luckily I am not taking any courses this year (at least, none for which I need to take an exam) so I may still find plenty of time to work on this project. It is now much more likely that I will end up using this tool in my PhD research, but I'll talk about that a bit more when the time comes...
<br />
<br />... to be continued ...
<br />
<br />
<br /> Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-87760158291423208882011-06-19T15:11:00.000-07:002011-06-19T16:23:10.834-07:00@Party 2011 resultsIn this post I will detail the results of two days filled with coding during <a href="http://atparty-demoscene.net/">@Party 2011</a> [atparty-demoscene.net]. But first, a brief status update on recent progress. I made progress on two major todo items mentioned in the previous post: testing of general functionality (loading, saving, creating/connecting nodes, and so on), and scene import. The latter, however, did not quite go according to plan. To keep a long story short, I ended up using an off-the-shelf, free-for-non-commercial-use 3ds Max/Maya to Ogre exporter. This exporter is a) full featured, b) working, c) actively developed, d) documented, and e) supported. Rant: #$@#!#% open-source projects! The number of OSS projects that meet all of the previously mentioned requirements can be counted on the fingers of a hand from which 3 or 4 fingers have been amputated. Any advocate of open-source software (hereafter referred to as 'delusional') who claims otherwise basically is a hobbyist or bedroom coder who does not know what s/he is talking about. I will stop ranting here; I deliberately did not want this to turn into a rant blog. Although I may set up a dedicated blog in the (near) future, just for that ;-)<br /><br />Back to business. I did some simple tests exporting (animated) scenes and objects from 3ds Max to Ogre, which worked fine. Since I do not have any 3ds Max skills, I left it at that for now. But learning the basics of 3ds Max has been high on my list of 'things to do or learn before I die'. However, I've always put it off because of the intimidating graphic user interface of 3ds Max (I'll spare you another rant here ;-). 
So in the next few weeks/months I will spend a few hours every week trying to grasp it.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-VdndpAAcKfw/Tf55A2lQF8I/AAAAAAAAHtc/g8iHoLGqRC0/s1600/script_editor1.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 294px;" src="http://4.bp.blogspot.com/-VdndpAAcKfw/Tf55A2lQF8I/AAAAAAAAHtc/g8iHoLGqRC0/s400/script_editor1.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5620062440563677122" /></a><br />Last week I spent two evenings implementing a script editor. The script editor allows you to edit, well, scripts! It supports syntax highlighting (currently only for shaders in Cg, HLSL, and GLSL syntax), but it can easily be extended to support all types of Ogre scripts (e.g. materials). Furthermore, the script editor allows you to recompile scripts on the fly, while displaying a 3d scene. I finalized this script editor during @Party, and to test whether it worked I ported some <a href="http://www.iquilezles.org/apps/shadertoy/">Shader Toy</a> [iquilezles.org] shaders. There still are some issues to work out, but they are indicative of a bigger 'untackled problem': even though the tool can deal with script compilation errors (and as a result will simply not display any shaded geometry on screen), it is possible for Ogre to throw an exception when loading a (successfully compiled) shader. Exception handling is something I have not fully implemented yet, but it is high up on my 'todo' list. <br />Without further ado, here is my updated todo list:<br /><ul><br /><li>Testing, testing and more testing!<br /><li> Learn some basic 3ds Max or Maya modeling skills, so I can test the scene export/import features<br /><li>Come up with an exception handling mechanism, to catch exceptions that originate from user-initiated actions. 
Examples include script compiler errors, setting invalid parameter values through the graphic user interface, and so on.<br /><li>Improve the colorcurve editor: replace the current graphic user interface, which requires you to animate individual red, green and blue curves to create a color gradient, with another interface where you can place colors at specific time points.<br /></ul><br />Note: I am giving myself a break from working on the 'nuts and bolts' of the tool, and have started implementing a GPU based particle renderer :-) Between that digression, learning basic 3ds Max skills, working on and around the house, and the friends and family that are coming over to visit us during the next few weeks, I do not think that I will have much time to work on anything else. Oh well :-)<br /><br />... to be continued ...Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-51909482760408982402011-06-05T10:27:00.000-07:002011-06-05T11:30:56.907-07:00Testing, testing, 1,2,3...Note: this post is a followup to my previous post.<br /><br />Here is a quick status update. Basically, I finished implementing the following functionality:<br /><ul><br /><li> <b>Keyframe node</b>: The graphic user interface now supports a basic keyframe editor, and keyframe nodes can be shared across different 3d objects in the same scene.<br /><li> <b>Postprocessing node</b>: There now is a separate postprocessing node for applying a single postprocessing effect to a scene that is either rendered directly from a camera node, or that was rendered previously and stored in a rendertarget node ("render to texture"). The node automagically exposes shader script parameters through the graphic user interface, so you can edit and/or animate them. 
This postprocessing node complements the legacy render node, which by default allows you to 'stack up' as many postprocessing effects as you like, but (for now) does not allow animating the shader script parameters.<br /></ul><br />I got these features to work in a matter of days, which I am quite happy about. The tool is rapidly nearing the point where it is actually usable by someone other than myself, since you should soon be able to create animations without having to change a single line of code. Of course it is also possible to code your own effects (the tool includes a C++ software development kit (SDK) with a few examples) if you like. But before that is really feasible, I will still have to work on the following list of to-do items:<br /><ul><br /><li><b>Testing, testing and more testing</b>: Including: create a basic scene by placing and connecting nodes. Add some 3d models to the scene. Render the scene to the screen. Remove some nodes, add some other nodes. Keyframe some 3d models. Keyframe the camera. Render the scene to an offscreen rendertarget. Apply several postprocessing effects to the rendertarget. Save. Load. Modify. Save again. Load again. Repeat this process until the program crashes, in which case I need to debug the code and fix the problem, or until the program does not crash, in which case I can continue with the next item on this list. I think this will take me a few days at least, if I want to do this properly.<br /><li><b>Scene import</b>: Get a Lightwave exporter plugin up and running. Extend the tool to import scenes (3d objects, materials, animations) that were exported using the plugin. In fact, I am compiling a first version of the plugin while I am writing this post ;-)<br /><li><b>Improve the color-curve editor</b>: Replace the current graphic user interface, which requires you to animate individual red, green and blue curves to create a color gradient, with another interface where you can put colors at specific time points. 
This is much more user friendly than the current implementation.<br /></ul><br />What happens after this remains to be seen. It depends a bit on where my interests are. I see two main directions I could go from here:<br /><ul><br /><li><b>Evolved virtual creatures</b>: Based on the work of <a href="http://www.youtube.com/watch?v=xiRhe8mL_08">Karl Sims</a> [youtube.com] and, to a lesser extent, <a href="http://capped.tv/archee-darwinism">Archee</a> [capped.tv], I have long been interested in creating my own version. More specifically, I want to evolve neural controllers that are able to control any skeletal structure (2, 4, 6 or more legs/wings/fins). This would require me to extend the tool with two specific nodes: an <i>evolving neural network</i> node, and a <i>3d model with support for skeletal animation</i> node. <br /><li><b>Deferred particle rendering</b>: Inspired by Fairlight's <a href="http://www.capped.tv/fairlight-blunderbuss0">Blunderbuss</a> [capped.tv] and later <a href="http://capped.tv/fairlight_cncd-agenda_circling_forth">Agenda circling forth</a> [capped.tv]. <br /></ul><br />I've been sitting on the papers that served as a basis for these algorithms for years now. It's about time that I did something for fun again; tool development can be a bit boring at times, staring at the same startup screen with the same placeholder effects, because my priority was to get the tool up & running (while obtaining two master's degrees and emigrating from the Netherlands to the US. Go figure ;-) Implementing these algorithms is not that much work; note that Smash (the programmer behind Blunderbuss) said on his blog that he coded the effect in one Saturday morning. However, history has shown that when I get it done I will spend a considerable amount of time playing with the simulation. 
This has happened before, when I added support for L-systems (similar to <a href="http://vimeo.com/23184018">Outracks</a> [vimeo.com]), which actually was the first effect evah to be added to the tool. History repeated itself when I added 4d Julia fractals, based on the work of <a href="http://www.cs.caltech.edu/~keenan/project_qjulia.html">John Hart and Keenan Crane</a> [caltech.edu] and later <a href="http://www.youtube.com/watch?v=9AX8gNyrSWc">IQ/Rgba</a> [youtube.com]. <br /><br />I will have all of the tool's functionality mentioned above, and either evolved virtual creatures or deferred particle rendering, working by the end of the summer. Mark my words :-D<br /><br />... to be continued ...Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-13643556559518768862011-05-30T21:46:00.001-07:002011-05-30T22:20:53.439-08:00Keyframes & skinsNow that school is (finally) out for summer, I can turn at least some of my attention to my pet project. The past few days I implemented one feature from the todo list mentioned in my previous post: a Keyframe Animation node, which can hold animation data (position, scale, rotation) for 3d models, cameras, lights, and so on. 
The graphic user interface has a very basic editor for it, as shown in the top left corner of the screenshot below:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/-CZMOweQA-ws/TeR44BdNCmI/AAAAAAAAHs8/3YI6RBdko5E/s1600/Untitled.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 236px;" src="http://4.bp.blogspot.com/-CZMOweQA-ws/TeR44BdNCmI/AAAAAAAAHs8/3YI6RBdko5E/s400/Untitled.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5612743939469412962" /></a><br /><br />Even though you can import Lightwave/3dsMax scenes and animations into the tool, I wanted to have a basic keyframe animation editor so that you can directly edit camera paths to be used in scenes that are not imported but constructed using the tool (e.g. containing completely code-driven effects, with generated 3d objects). There are still some minor issues to iron out though: I want to be able to connect a keyframe animation to multiple 3d objects, which is trivial to implement. However, a slightly bigger issue is that if you reuse the keyframe animation in multiple 3d objects, each object will share the same position, orientation and/or scale. To remedy this, a 'time delay' node is needed, which does nothing else than ... manipulate the time (e.g. make it go twice as fast, twice as slow, or reverse). Although this should not be that much work, it is an example of how this project is kind of a 'Pandora's box': every time I implement one bit of functionality, I discover that other features are desirable to make the best possible use of the original bit of functionality. There still is a ton of work to do before this tool is really functionally complete and stable enough to be used in a demo production. Fortunately, I still like to work on it :-)<br /><br />Another feature that was implemented, more or less by accident, is 'skinning' of the graphic user interface. 
By loading a 'cascading style sheet', which basically is a text file that specifies things like background colors, button images and so on, you can drastically change the appearance of the graphic user interface. Just compare the screenshot above with the screenshot from the previous post; this difference is accomplished by loading a text file. Since I really dislike the Windoze look&feel, I think this is kinda neat :-)<br /><br />For the record, here is a more or less prioritized and updated todo list:<br /><ul><br /><li> <b>Change ColorCurveEditor</b>: currently animating a color sequence requires animating the r, g, b curves separately. I don't remember why I implemented it that way (probably because it was a quick fix at a time when I was still learning Qt programming), but this is not very useful. I want to change this so you can specify keyframes as color values associated with time points. I may even ditch the curve display completely, and go for a table-based approach like the Keyframe Animation editor (Keep It Simple, Stupid :-D).<br /><li> <b>Scene import</b>: I am leaning towards using LightWave as a preferred modeling package. Not that I know how to model, though; I think that in the past modeling software was too complicated for my taste: I just want a modeler and that's it. What I do like about LightWave is that the modeler and layouter (scene animation/renderer) are separate programs. I am leaning towards extending a LightWave export plugin to maintain tight control over its functionality. This should provide me with a 'basic but acceptable' model/scene/animation/material exporter rather quickly, which I can improve and extend over time. 
If I've learned one thing from many years of using open-source software, it is that 'if you want something done, you have to do it yourself'.<br /><li> Postprocessing node: A node that allows you to apply postprocessing effects (blur, bloom etc) to a rendertarget, while controlling and/or animating the effect parameters. Currently, postprocessing is supported but the graphic user interface does not allow animation of the parameters (meaning, you have to hardcode it).<br /><li> Animated mesh node: for loading and controlling a mesh with skeletal animation. See previous post for more details. This is more or less low priority, since the only need for it that I can think of at the moment is for making 'evolved virtual creatures' (which in turn would also require a neural controller node... there is Pandora's box again ;-)<br /></ul><br />... to be continued ...Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-46305028386571063862011-02-21T12:12:00.000-08:002011-02-21T12:59:36.009-08:00Progress!It has been a while since my last update, but that does not mean that I have not been coding away on my little hobby project. There has been quite a lot of progress; it just did not result in interesting screenshots showing off new features. Oh well, I guess I will still include a screenshot in this post, after a short list of what I have and have not done:<br /><ul><br /><li> Implemented some basic nodes: Camera, Scene, Light, Mesh, Plane, ParticleSystem, RenderTarget<br /><li> Render to texture: it is now possible to construct a scene and render its output to a texture (= a rendertarget node). This texture can then be used as input to another render node.<br /><li> Testing, testing, testing and more testing.
I've been randomly placing and connecting nodes, saving and then reloading them, disconnecting them, reconnecting them and so on, to check whether the program is stable and there are no memory and/or resource leaks. This is an ongoing (slow) process, but it's getting there :-)<br /></ul><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/-jn5rauluUqI/TWLRed9zibI/AAAAAAAAHng/hW-Jtc50848/s1600/Developer%2BInterface%2B-%2BVR%2Benvironment%2Beditor.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 242px;" src="http://1.bp.blogspot.com/-jn5rauluUqI/TWLRed9zibI/AAAAAAAAHng/hW-Jtc50848/s400/Developer%2BInterface%2B-%2BVR%2Benvironment%2Beditor.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5576249610008234418" /></a><br />The following nodes and/or functionality still have to be implemented (listed in approximate order of importance):<br /><ul><br /><li> <b>Scene import</b>: import a 3d scene from one or more 3d tools (e.g. 3ds Max, Maya, Blender). Perhaps an intermediate format, such as dotscene, exists that each of these 3d tools can handle?<br /><li> <b>Animated Mesh node</b>: This node loads a mesh and its accompanying skeletal animation structure. Through its parameters you can then control which animation gets shown on screen (e.g. walking, running and so on).<br /><li> <b>AnimationTrack node</b>: This node can be connected to any scene object (light/mesh/camera/particlesystem) and animate its position, orientation or scale. This animation can be edited through the tool itself- which will display a list of keyframes and their associated values.<br /><li> <b>Neural controller node</b>: This node evolves a neural network to control any animated mesh.
This is heavily inspired by Karl Sims' work on <a href="http://www.karlsims.com/evolved-virtual-creatures.html">Evolved Virtual Creatures</a>, and the more recent 4k demo from Archee called <a href="http://www.youtube.com/watch?v=07XiEjlluZw">Darwinism</a>. The main difference is that these prior works use genetic algorithms to evolve the controllers, while I intend to use artificial neural networks- more specifically, Ken Stanley's <a href="http://www.cs.utexas.edu/users/nn/keyword?rtneat">NeuroEvolution of Augmenting Topologies</a>.<br /><li> <b>PostProcessing node</b>: Although Ogre3d achieves postprocessing effects using a compositor framework, I am contemplating whether it would be better/easier (for the kind of virtual environments that I am targeting with this tool) to have a node that is dedicated to postprocessing. It takes a texture (from a RenderTargetNode) as input, and applies a postprocessing effect (e.g. blur, bloom, ASCII and so on) to it. The effect's shader parameters are exposed through the node GUI as animatable input parameters.<br /></ul><br />Unfortunately I am also getting a bit busier with my graduate study, which means that I will have less time to spend on this project. However it looks like I will have the entire summer free to work on this project, which will definitely allow me to implement everything that is listed above.<br /><br />... to be continued ...Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-87670684029759104922011-01-17T12:16:00.000-08:002011-01-17T13:31:28.274-08:00It's alive!Although I've been enjoying a holiday in the past few weeks, they have also been quite busy. Between having a good time with my family in Louisiana/US and back in the Netherlands, I've also been working on my 3d tool.
I have been cleaning up and/or connecting some loose ends: the recently developed components that I mentioned in the previous blog post are now fully functional within the tool. Here are the mandatory screenshots:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_SbtZUFx5nrY/TTSj6b35cRI/AAAAAAAAHSI/_SQPmslGc6w/s1600/shot1.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 270px;" src="http://1.bp.blogspot.com/_SbtZUFx5nrY/TTSj6b35cRI/AAAAAAAAHSI/_SQPmslGc6w/s400/shot1.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5563251664019353874" /></a><br />This screenshot shows the timeline canvas widget with two modules (actually, nodes :-) arranged on the timeline. Modules are special nodes, as they appear on a timeline and allow timeline based compositing. The other new addition visible in this screenshot is that the performance profiling features provided by OGRE3D (the library used to draw graphics on the screen) are enabled. <br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_SbtZUFx5nrY/TTSj69tzwbI/AAAAAAAAHSQ/1Ey099FvNwg/s1600/shot2.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 270px;" src="http://1.bp.blogspot.com/_SbtZUFx5nrY/TTSj69tzwbI/AAAAAAAAHSQ/1Ey099FvNwg/s400/shot2.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5563251673103843762" /></a><br />This screenshot shows a few different widgets in action. First of all, double clicking on a module in the timeline brings up two other widgets: the property inspector (as shown in the first screenshot) and the node canvas widget. The node canvas widget shows an alternative representation of the selected module, and allows you to control its input properties by adding other nodes and connecting them.
In this screenshot, four input properties are controlled by curve nodes, and a fifth one is controlled by a color gradient node. Double clicking on a curve or colorcurve node again brings up two widgets: the property inspector, and a curve editor. Both curve editors are visible in the screenshot above. <br /><br />I have also been working on a few other features that are not directly visible in these screenshots. They are:<br /><ul><br /><li>Saving and loading compositions (nodes, modules, their properties, and their connections) in XML format; the human readability of XML helps with debugging. At a later stage I may switch back to a binary format for loading and saving- however this depends on the resulting filesize(s). Since plain text compresses really well, it remains to be seen if there is an advantage to switching back to a binary format.<br /><li>Integrating a memory leak detector (<a href="http://vld.codeplex.com/">Visual Leak Detector</a> [codeplex.com]) which helped with finding the last remaining memory leaks. The tool is now guaranteed to be 'leak free' :-D<br /><li>Automatic creation and utilization of a 'resource cache'. Scripts, shaders and so on are created once, and subsequently cached in a binary format. The next time that the resource is requested, the cached binary file is used instead of recompiling it.<br /></ul><br /><br />The two screenshots show how the core features of this 3d tool are more or less in place: node ~and~ time based compositing of 3d visualizations in a visual programming environment. However, I am still ironing out some of the nuts and bolts of the tool and framework; currently mostly dummy nodes & modules are implemented. Although they do draw something on the screen, they are nowhere near useful enough to create e.g. a demo production.
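As an aside, the 'resource cache' mentioned in the feature list above boils down to a compile-once, look-up-thereafter pattern. Here is a minimal in-memory sketch of that logic; the actual tool persists the compiled form to binary files on disk, and every identifier below is hypothetical rather than taken from the editor's code:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Sketch of a resource cache: a resource (script, shader, ...) is compiled
// once and the result is keyed by a hash of its source text, so an unchanged
// resource is never recompiled. This in-memory map stands in for the binary
// files the real tool writes to disk. All names here are illustrative.
class ResourceCache {
public:
    // 'getOrCompile' stands in for whatever expensive step produces the
    // binary form; compileCount is only there to make the behavior testable.
    std::string getOrCompile(const std::string& source,
                             int* compileCount = nullptr) {
        const std::size_t key = std::hash<std::string>{}(source);
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;              // cache hit: reuse compiled form
        if (compileCount) ++*compileCount;  // cache miss: compile once
        std::string compiled = "compiled(" + source + ")";
        cache_.emplace(key, compiled);
        return compiled;
    }

private:
    std::unordered_map<std::size_t, std::string> cache_;
};
```

Keying the cache on a hash of the source text has a nice side effect: editing a script automatically invalidates its cached binary, because the hash changes.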
Before I (or anyone else who is interested ;-) can start with implementing more useful nodes, there are a few things left to implement and/or figure out.<br /><br /><ul><br /><li>What is the 'bare minimum' set of node types that is needed? Here is a list representing my current thoughts. Most of these node types can be used 'as is'. At least some of these nodes can be subclassed for creating custom content. <br /><ul><br /><li>Scene: This node holds a 3d scene. For instance, when importing a scene from your favorite 3D design program (LightWave, 3DS, Blender), it will be represented by a Scene node. Additional content can be provided by connecting 'scenecontent' nodes.<br /><li>SceneContent: can be subclassed to provide content to be inserted in a scene. For instance, the SceneContent node can be subclassed to add a mesh to the scene and adjust or animate its properties (position, scale, rotation). Another subclass could provide a particle system. And yet another subclass could provide generative content.<br /><li>Camera: Represents a camera and all its properties, and possibly a camera path. It could be possible to connect this camera to different scenes.<br /><li>Viewport: Represents a viewport and its properties (such as dimensions). A camera can have several viewports, but a viewport is connected to a single camera.<br /><li>Render: Draws the contents of a viewport either to the screen or an offscreen rendertarget. This node can be subclassed to implement forward and deferred renderers.<br /><li>Texture: Represents a texture. Provides input pins such as (u,v) and output pins such as (rgba, r, g, b, a)<br /><li>RenderTarget: Represents a texture that can be used to render a viewport to.<br /><li>Curve: Represents an animated number value. Already implemented.<br /><li>ColorCurve: Represents an animated color value. Already implemented.
However, since the current implementation requires you to key the r,g,b,a values independently, I may rewrite this so that you can keyframe colors instead.<br /></ul><br /><li>Some widgets are also required:<br /><ul><br /><li>Camera path editor: this can be done 'quick and dirty' by just showing an editable list of coordinates. Camera paths can be imported from your favorite 3D editor (Lightwave, 3DS, ...) and saved/loaded from the tool.<br /><li>Script editor: I want to include at least a basic text editor to edit scripts for shaders, materials and so on. <br /><li>Information: displays information about the item that is currently under the mouse cursor.<br /><li>Shader editor: Eventually I want to include a node-based shader compositor. Even though this is not that much work, it is quite low priority. First I want to get all features listed above to work.<br /></ul><br /><li>And finally, to complete my transition from Windows to OSX, I want to switch tool development over to XCode. In theory the framework supports it (no Windows-specific components are used anymore), but in practice I must learn a new development environment from scratch- with a project configuration that is a bit beyond the 'hello world' examples found on the internet. I will spend some time each week on making this happen, because after about two decades of using Windows and developing software for it, I am extremely tired of it. Even if that gets me called a fanboy- I don't really care what others think of me ;-)<br /></ul><br /><br />There has been plenty of progress, but there also is quite a lot to do! Fortunately all of the above features 'only' require a few hours of implementation each. Meaning, I do not need to make huge changes in the underlying framework code (I think... ;-). So in the coming period I will spend a few hours each week working on one feature, until all of them are done. I plan on having all of them done by the summer of 2011.<br /><br />... 
to be continued ...Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-67204694160250422392011-01-02T06:20:00.000-08:002011-01-02T10:31:21.085-08:00Happy new year!After a few weeks of inactivity, here is yet another report of some progress. This is the first of (at least) two major increments in functionality. Have a look at the image below, which summarizes the new features that have been integrated in the editor.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_SbtZUFx5nrY/TSC_gQZ7esI/AAAAAAAAHRc/KHy_kZZP6hs/s1600/Happy%2B2011.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 284px;" src="http://3.bp.blogspot.com/_SbtZUFx5nrY/TSC_gQZ7esI/AAAAAAAAHRc/KHy_kZZP6hs/s400/Happy%2B2011.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5557652501055634114" /></a><br /><br />This image shows the following working features:<br /><ul><br /><li>Timeline editor (top left): this window allows time-based compositing, which is accomplished by inserting and arranging modules on a timeline. These modules draw some 3d content on the screen. This window supports zoom/pan/undo/redo. All module code is plugin-based, so new effects can be added by copying a DLL file and restarting the editor.<br /><li>Color curve editor (top right): this window allows editing of color gradients that can be used in the animation.<br /><li>Curve editor (bottom right): this window allows editing of curves that can be used in the animation. A spline (shown in the center graph) and a modifier (shown in the bottom graph) are combined into a final curve (shown in the top graph). By specifying parameters, the behavior of both the spline and the modifier can be influenced.
For instance, in this image the modifier is a random value; however it could also be a constant value, sawtooth, inverse sawtooth and so on.<br /></ul><br /><br />This image shows the following incomplete features:<br /><ul><br /><li>Node editor (bottom left): this window allows node-based compositing, which is accomplished by inserting nodes and making connections between them. This window supports zoom/pan/undo/redo. Example nodes are for instance: a 3d scene (imported from 3ds max, Maya, Blender), a postprocessing effect such as blur, a render node (either forward or deferred) and so on. Render nodes are also modules - this makes a mixture between timeline- and node-based compositing possible.<br /></ul><br /><br />The editor now has a more or less working skeleton for all the features that it needs in order to be able to orchestrate visualizations. However, one major step remains, which is the big one. Up until now the engine only supported timeline-based compositing, with all functionality encapsulated in modules. In the next week(s) this will be updated so that node-based compositing is also supported. This will be a major update; but when it is done, the editor can (finally) be used again to make some real, actual productions!<br /><br />Happy new year!Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-67976825503334436462010-11-26T12:41:00.000-08:002010-11-27T21:59:08.449-08:00More unexpected progressWhile my better half was preparing the traditional Thanksgiving meal, I made myself useful by indulging in another programming spurt and creating the graphic user interface part needed for 'node based compositing'. I am really impressed with the flexibility of the Qt programming library, and the wealth of examples out there. I started out with an open-source mindmapping program, lobotomized most of its code, leaving only the code in place that provided a basis for what I want.
Then I expanded the code to include Node based interfaces, with Pin based I/O, and added the accompanying graphics widgets for it. After a day or so of fiddling around, here is the result:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_SbtZUFx5nrY/TPHumd-dawI/AAAAAAAAHQY/Chk9emn5vTw/s1600/thirdconnection.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 239px;" src="http://2.bp.blogspot.com/_SbtZUFx5nrY/TPHumd-dawI/AAAAAAAAHQY/Chk9emn5vTw/s400/thirdconnection.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5544474960918833922" /></a><br />The following features are fully functional:<br /><ul><br /><li> Wrapping Node based interfaces in graphical widgets,<br /><li> Wrapping pin based I/O components in graphical widgets,<br /><li> Selecting, moving and deleting widgets,<br /><li> Zooming and panning the widget canvas,<br /><li> Connecting I/O pins based on their value types<br /></ul><br />The following features still need work:<br /><ul><br /><li> At the moment you can only add nodes, and not remove them.<br /><li> It is possible to make cyclic connections. This should be prohibited.<br /><li> Auto-arrangement of a node and all the nodes it connects to through all its output pins.<br /><li> Decide on whether I will keep the undo/redo feature, or just remove it. Even though this node editor supports it, it will be a lot of work to support it throughout the entire editor. I think. Or better said: I need to think a bit more about that ;-)<br /></ul><br />Although I had to spend a lot of time refactoring the existing code after I had lobotomized the mindmapping example code, I am still impressed with how quickly you can get something like this up and running.
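Two items from the lists above- connecting I/O pins based on their value types, and prohibiting cyclic connections- come down to a single validity check performed before a connection is created. Here is a rough sketch, heavily simplified to one value type per node and using made-up names (not the editor's actual API); it also assumes the existing graph is already acyclic, which holds if every connection was validated this way:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A minimal stand-in for a node in the canvas: the value type its pins
// carry, and the nodes fed by its output pins. Purely illustrative.
struct Node {
    std::string valueType;       // type exposed by this node's pins
    std::vector<Node*> outputs;  // nodes fed by this node's output pins
};

// Returns true if 'to' is reachable from 'from' by following output pins.
// Plain depth-first search; safe without a visited set as long as the
// existing graph contains no cycles.
static bool reaches(const Node* from, const Node* to) {
    if (from == to) return true;
    for (const Node* next : from->outputs)
        if (reaches(next, to)) return true;
    return false;
}

// A connection src -> dst is allowed when the pin value types match and
// dst does not already (transitively) feed back into src, since that
// would close a cycle in the node graph.
bool canConnect(const Node* src, const Node* dst) {
    return src->valueType == dst->valueType && !reaches(dst, src);
}
```

Running this check when the user drags a connection between two pins is enough to rule out both type mismatches and cycles before they ever enter the graph.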
In the past I had written a graphic interface like this in Java (to practice my programming skills I spent about 80 hours implementing a digital logic drawing board and processor); I guess that helped a lot with understanding what the existing code did and did not do. But again... unexpected progress!<br /><br />... to be continued.Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-32282092716576928042010-11-22T17:13:00.000-08:002010-11-25T11:22:15.306-08:00Unexpected progressThis post is quite unexpected, as is the development progress that is reported in it. In the last two weekends I had "a few" hours to spend, which of course I spent chatting up a C++ compiler. I am utterly amazed at how fast things have progressed since the previous post: at the time I only had the intention to start rewriting the GUI of my 3d tool (using Qt this time), but now I can report some real, actual ~progress~. Below is one of those "pictures that say a thousand words".<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_SbtZUFx5nrY/TO63Neg1b1I/AAAAAAAAHPs/NK_5NVDsUoA/s1600/secondcoming.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 241px;" src="http://3.bp.blogspot.com/_SbtZUFx5nrY/TO63Neg1b1I/AAAAAAAAHPs/NK_5NVDsUoA/s400/secondcoming.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5543569633496624978" /></a><br />Basically, I have the following features working:<br /><ul><br /><li> Configuration of options, such as which render system to use (DirectX or OpenGL). This is not shown in the screenshot.<br /><li> Logging subsystem, shown in the screenshot.<br /><li> Resource inspector: this static list shows which resources (textures, shaders and so on) are currently in use.<br /><li> Parameter editor: each parameter in this list can be edited using an appropriate editing widget.
So if you click on a texture parameter, you will see a list of textures. If you click on a Vector3, you can edit its 3 values. And so on- you get the idea.<br /><li> Minor features that have been implemented: load/save dialog, playback controls, texture browser <br /></ul><br />I expected that it would take me weeks (!) to get to this point; in reality it took me 'a few' hours (meaning, I didn't get much sleep in the last two weekends). I got really lucky that I managed to find some really useful programming examples of how to "get things done" using the Qt framework :-)<br /><br />Next on the todo list is to start working on the timeline-based sequencer/editor (which allows you to place modules that draw 3d scenes on different layers of the timeline), followed by the 'node based compositing' part. <br /><br />(Note: an example of a timeline based sequencer can be found in my November 5th post, in the screenshots for Version 3. Here are two reference screenshots. One shows node based compositing in Blender 2.5- I am aiming for something similar. The other screenshot is from ShaderFX and shows how node-based compositing can be used to replace low-level shader programming.
In this way you don't need to be a programmer to create 3d animations and/or shaders)<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_SbtZUFx5nrY/TOsaflUWBeI/AAAAAAAAHPU/gISa-qmv2MM/s1600/blender_nodes.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 222px;" src="http://2.bp.blogspot.com/_SbtZUFx5nrY/TOsaflUWBeI/AAAAAAAAHPU/gISa-qmv2MM/s400/blender_nodes.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5542552896305235426" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_SbtZUFx5nrY/TOscktXNS7I/AAAAAAAAHPk/kO8ZlxH0VPw/s1600/ShaderFX_1.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 178px;" src="http://1.bp.blogspot.com/_SbtZUFx5nrY/TOscktXNS7I/AAAAAAAAHPk/kO8ZlxH0VPw/s400/ShaderFX_1.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5542555183387331506" /></a><br />There is a small chance that I will reverse this order, as I have some example code that may help tremendously with creating the node based compositing GUI, and I may be able to use this tool for next semester's 'principles of biological modeling' class. However, I will definitely not be able to work on this in the next 3 weeks, as I am ~swamped~ with end-of-semester work for my graduate study at Brandeis University. <br /><br />... to be continued!Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-83256563997902630462010-11-05T08:30:00.001-07:002010-11-23T08:50:02.845-08:00A brief history of my 3d editor programming efforts...I wanted to devote one post to providing an overview of the evolution of the 3d editor that I am working on. While engaged in your passion it is easy to lose track of how time passes by. 
While writing this post it struck me that a number of years have passed since I started working on this editor. Anyway, a rough functional distinction between the various versions can be made:<br /><ul><br /><li> Version 1: Using C++ for direct control of the DirectX API, and Microsoft Foundation Classes to create a GUI<br /><li> Version 2: Using C# to build an engine on top of managed Ogre as a rendering component, and use WinForms to create a GUI<br /><li> Version 3: Using C++ to build an engine on top of Ogre as a rendering component, and glue that to a WinForms based GUI programmed in C#<br /><li> Version 4 (in development): Extend the C++ engine to support node based compositing (besides timeline based compositing), and use Qt to create a GUI. Also, this version will be platform independent.<br /></ul><br />The remainder of this post will provide some basic information about each of the versions, accompanied by some screenshots.<br /><br /><br /><b><u>Version 1: initial experiments and the beginnings of a game editor</u></b><br /><br />It all started with some simple experiments done in C++ (using Microsoft Visual Studio 6) and the DirectX API- I think version 7 or 8 at the time. I had made a few simple programs which loaded and displayed a 3d scene, experimented with simple generation of landscape meshes and so on.
When that went well, I went on to programming more elaborate routines, all of which basically were aimed at speed-optimizing rendering of a large number of triangles:<br /><ul><br /><li> Octree based spatial subdivision, to determine which parts of a scene are on-screen and which ones are off-screen<br /><li> Occlusion culling, to minimize the rendering of triangles that are further away from the camera and will be obscured by triangles closer to the camera <br /><li> Collecting all rendering operations into batches, and rendering those.<br /><li> Rendering meshes with adaptive level of detail: the further away from the camera, the fewer triangles will be used to display the meshes.<br /></ul><br /><br />The screenshot below shows the octree based spatial subdivision at work:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_SbtZUFx5nrY/TNQpWWlcqtI/AAAAAAAAHOg/NVejgTKogrs/s1600/exp1.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 300px; height: 224px;" src="http://4.bp.blogspot.com/_SbtZUFx5nrY/TNQpWWlcqtI/AAAAAAAAHOg/NVejgTKogrs/s400/exp1.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5536095305941691090" /></a><br /><br />The screenshot below shows the occlusion culling routine at work. When the view of the city is obscured by a wall (top image) the geometry behind the wall is not rendered. As the camera rises above the wall (center images) more octree cells containing geometry become visible.
The bottom screenshot is included just as a reference to what the city model looked like.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_SbtZUFx5nrY/TNQpHpVgzGI/AAAAAAAAHOY/uY2Uh4N1-5Y/s1600/occlusion+culling.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 142px; height: 400px;" src="http://1.bp.blogspot.com/_SbtZUFx5nrY/TNQpHpVgzGI/AAAAAAAAHOY/uY2Uh4N1-5Y/s400/occlusion+culling.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5536095053277088866" /></a><br /><br />I also built a graphic user interface around these features using Microsoft Foundation Classes (MFC). Below are a few screenshots of this version of the editor... <br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_SbtZUFx5nrY/TNQjOm4tMII/AAAAAAAAHOQ/h2biWp1Br9A/s1600/v1_02.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 299px;" src="http://2.bp.blogspot.com/_SbtZUFx5nrY/TNQjOm4tMII/AAAAAAAAHOQ/h2biWp1Br9A/s400/v1_02.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5536088575808712834" /></a><br /><br />At this point in time I looked back at what I had accomplished and realized a few things:<br /><ul><br /><li> The direction that my editor was going in was more toward a game (level) editor than toward a 3d visualization editor. I did not like this direction. At all.<br /><li> Even though I thought that programming low level routines for rendering large numbers of triangles was a nice challenge, I did not want to spend my time "reinventing the wheel" as there are plenty of other solutions available.
I started considering dropping in an existing open-source 3d engine to handle all the rendering aspects, so that I could focus on creating the editor that allows the workflow I have in mind.<br /><li> Developing graphic user interface components in MFC took up a lot of time, more than was necessary. I started to consider switching to other environments that are friendlier for developing your own user interface components<br /></ul><br />Ultimately this resulted in my first complete rewrite of the editor... <br /><br /><br /><b><u>Version 2: The limitations of C# and .NET</u></b><br /><br />After some deliberation I had made a few decisions:<br /><ul><br /><li> Use existing open source graphic rendering engines. I settled on Ogre3d because of its completeness in features, documentation and examples.<br /><li>Use C# and .NET for the graphic user interface, because that is what I had experience in.<br /></ul><br /><br />Here are a few screenshots showing the graphic user interface. Notice that most of the panels (as discussed in the previous blog post) are implemented: editors for timeline, scripts, parameters, splines and so on.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_SbtZUFx5nrY/TNQt54YO2tI/AAAAAAAAHOo/xMrZZdHnDIk/s1600/v2_01.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 312px;" src="http://4.bp.blogspot.com/_SbtZUFx5nrY/TNQt54YO2tI/AAAAAAAAHOo/xMrZZdHnDIk/s400/v2_01.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5536100314354997970" /></a><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_SbtZUFx5nrY/TNQuNsGdnxI/AAAAAAAAHOw/QUcJUsEkTPY/s1600/v3_03.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 320px;" src="http://2.bp.blogspot.com/_SbtZUFx5nrY/TNQuNsGdnxI/AAAAAAAAHOw/QUcJUsEkTPY/s400/v3_03.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5536100654656626450" /></a><br /><br />The decision to use C# had its good and its bad sides. The good side is that the ideas that I developed for how to structure the engine and the GUI are more or less completely intact in the current version of the editor, and that it did not take much time to implement all of them. The bad side is that I built everything in C#, which included using a wrapper for Ogre. Let's just say that at the time I did not think straight and was not able to extrapolate the limiting consequences of that decision. The most important ones were:<br /><ul><br /><li> Because of using a wrapper for Ogre, it always took a while before updates to Ogre were supported in updates to the wrapper code.<br /><li> The wrapper only wraps Ogre, not all of the additional plugins that you may want to use (e.g. OIS for accessing keyboard, joystick and mouse input from your code)<br /><li> If I wanted to make changes to Ogre, I had to make changes to the wrapper code as well. <br /></ul><br />So, after a few weeks of working on this version, I labeled it "a can of worms" and started considering yet another rewrite...<br /><br /><b><u>Version 3: trying to mix things that really should not be mixed</u></b><br /><br />For some reason I came up with the idea that I could get the "best of both worlds" by using C# for creating the GUI, while using C++ for creating the engine. I would have to glue those two together using a tool called "Simplified Wrapper and Interface Generator" (SWIG). This part of the process did not take me that long. Finally I had direct access to all the goodies (plugins, examples etc) made available by the Ogre community, while creating custom GUI components was quite easy in .NET. At least that was what I thought...
but more on that after the mandatory screenshots below.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_SbtZUFx5nrY/TNQwImxwGbI/AAAAAAAAHPA/9jkaO9kbo2k/s1600/v3_02.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 310px;" src="http://1.bp.blogspot.com/_SbtZUFx5nrY/TNQwImxwGbI/AAAAAAAAHPA/9jkaO9kbo2k/s400/v3_02.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5536102766351489458" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/_SbtZUFx5nrY/TNQwIVXWtRI/AAAAAAAAHO4/KIQE3qRb2k4/s1600/v3_01.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 310px;" src="http://2.bp.blogspot.com/_SbtZUFx5nrY/TNQwIVXWtRI/AAAAAAAAHO4/KIQE3qRb2k4/s400/v3_01.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5536102761677370642" /></a><br /><br />While working on Version 2 and Version 3 of this editor, I had enrolled in a graduate program (an M.Sc. specializing in Human-Computer Interaction) at the Delft University of Technology in the Netherlands. I was lucky enough to be able to use this editor as a basis for my thesis concerning 'patient motivation in virtual reality based neurocognitive rehabilitation'. This proved to be an excellent case study to test my editor and engine. 
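<br /><br />As a brief aside on the C#/C++ glue described above: under the hood, SWIG-generated bindings boil down to a layer of flat C-style entry points that the generated C# proxy classes call into. Here is a minimal sketch of that pattern; the Engine class and function names are hypothetical stand-ins, not my actual engine code:

```cpp
#include <string>

// Hypothetical stand-in for a C++ engine class (not my actual engine code).
class Engine {
public:
    void loadScene(const std::string& name) { current_ = name; }
    const std::string& currentScene() const { return current_; }
private:
    std::string current_;
};

// Flat C-style shims, roughly the kind of layer SWIG generates so that the
// C# proxy classes can reach the C++ object. Every engine feature the
// editor needs has to pass through shims like these.
extern "C" {
    void* Engine_new() { return new Engine(); }
    void Engine_delete(void* self) { delete static_cast<Engine*>(self); }
    void Engine_loadScene(void* self, const char* name) {
        static_cast<Engine*>(self)->loadScene(name);
    }
    const char* Engine_currentScene(void* self) {
        return static_cast<Engine*>(self)->currentScene().c_str();
    }
}
```

This indirection is exactly why changes to the engine also meant changes to the wrapper: every new method needs a new shim plus a regenerated C# proxy.<br /><br />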
In order to create a game-based rehabilitation exercise, I had to finalize or add a few bits and pieces to the engine:<br /><ul><br /><li> Saving and loading of engine states,<br /><li> Sound (using the FMOD library),<br /><li> Saving and loading of XML files, for data or configuration input and output,<br /><li> Keyboard and mouse input,<br /><li> Wii Remote input, used not only as a game controller but also for headtracking,<br /><li> General robustness of the editor and engine, so that they run reliably and stably on different computers.<br /></ul><br /><br />The results from this work can be found <a href="http://mmi.tudelft.nl/vret/index.php/Addressing_Patient_Motivation_In_Virtual_Reality_Based_Neurocognitive_Rehabilitation">here [tudelft.nl]</a> (this includes some screenshots of the final game, and a video that shows how patients would interact with the system using pointing and headtracking mechanisms).<br /><br />However, I realized that there are some major downsides to having the editor/engine code split across C# and C++. The most important one is that not having direct access to engine functionality from the editor is quite limiting. Everything has to be handled by proxy wrapper classes (generated by SWIG). There were several options to work around this, including generating wrapper code for Ogre myself, re-integrating MOgre, or accepting the current limitations. A discussion with the author of Yaose, a script editor for Ogre, pointed me in the direction of the Qt framework for creating GUIs. In my opinion this framework is more flexible than C# and WinForms for creating GUIs; it would allow direct access to the rendering engine without the need for wrapper code, and as a bonus, it runs on multiple platforms. Since Ogre also runs on multiple platforms, this would mean that, in theory, my editor could be configured to run on many different platforms (Windows, OS X, Linux, iPhone, ... 
).<br /><br /><br /><b><u>Version 4: Doing It Right(er)</u></b><br /><br />So recently I started yet another rewrite. But as with Version 3, it is partly a rewrite and partly an extension of the codebase. This time I need to rewrite the GUI, but there is plenty of Qt example code available to get me started (a few days of work on the Qt version got me to the same point as a few <i>weeks</i> of work on C#/WinForms). Furthermore, the engine codebase needs to be extended to allow node-based compositing alongside timeline-based compositing. I am also contemplating setting the codebase up so that the engine can be separated from the underlying graphics rendering framework, so you could directly access OpenGL or DirectX instead of going through Ogre as middleware. This would facilitate using the editor for demoscene productions. I expect that version 4 of the editor will be operational and ready to be used in the summer of 2011.<br /><br />However, I don't have any useful screenshots to show yet... And it looks like, due to my university schedule, I will not be able to spend much time on programming (if any) in the next few weeks. So the next major update will have to wait until the end of the year...<br /><br />... but at least, progress is still being made! <br /><br />:-DSachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-10137064396252200992010-10-31T12:11:00.000-07:002010-10-31T12:42:49.222-07:00A blueprint for the editor...<div style="text-align:center;">... or better said, more doodling during moments of downtime!</div><br /><br />Just a quick post to make the results from another brief brainstorm session available for digital eternity. The image below provides a better overview of the individual widgets that the editor (now that I think of it, perhaps 'composer' is a more descriptive name) must have. I added a priority to each of the widgets, as a small reminder to myself. 
Without these reminders I tend to go off on "coding tangents", working on fun stuff instead of what really needs to be done. It may not seem that way, but my ultimate goal is not to create this editor, but to use it to create realtime 3d environments, so that is what I really want to be working on!<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_SbtZUFx5nrY/TM3AzszxF8I/AAAAAAAAHOI/9hb4sEJuiEU/s1600/gcog.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 212px;" src="http://3.bp.blogspot.com/_SbtZUFx5nrY/TM3AzszxF8I/AAAAAAAAHOI/9hb4sEJuiEU/s400/gcog.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5534291511542355906" /></a><br /><br />In the few spare hours that I have had available in the past week, I have been working on, and mostly finished, the following widgets:<br /><ul><br /><li> 1- The main window, with toolbar, menu and status bar<br /><li> 1- The 3d preview widget<br /><li> 1- The information widget<br /><li> 1- The logging widget<br /><li> 2- The parameter list widget<br /></ul><br />I am really pleased with how fast this got finished. Qt is a very feature-complete and easy-to-understand environment, with plenty of examples available on the internet to jumpstart your development! Developing these features for the previous version of this tool (with a GUI made using C# and WinForms) took me quite a bit longer, mostly because I had to code all the widgets from scratch, since no decent open-source examples were available. Also, trying to get the docking of widgets to work really drove me bonkers at the time. 
Previously this took a few weeks or months to develop; this time it took me a few hours (including the ability to graphically skin the widgets, if necessary).<br /><br />For all the other widgets, I have created empty placeholders:<br /><ul><br /><li> 3- Timeline editor, which can be collapsed to show only a time bar<br /><li> 3- Node editor<br /><li> 4- Curve editor<br /><li> 4- Color curve editor<br /><li> 5- Material, compositor, shader script editor<br /><li> 5- Node script editor<br /></ul><br />The timeline and node editor are the 'meat' of this program. The node editor in particular requires some changes to be made to the existing engine code. The curve and color curve editors allow parameters to be animated. The material and shader script editor can be used for real-time editing, so that any changes become instantly visible in the 3d preview (I have found a very nicely done open-source Cg and GLSL shader editor, which will be incorporated into this editor when the time comes). The node script editor is the 'icing on the cake': it will allow programmers to add new effects (in a C++-like language) without the usual compilation cycle. When an effect is working properly, the node script can then be compiled into a plugin that is dynamically loaded at runtime. However, this feature will most likely not make it into the first release of this editor. <br /><br />Later this week I will post some screenshots from the previous version of this editor (the one made in C# and WinForms), for comparison's sake. That version had all widgets, except for the color curve editor and the node script editor (since it used only timeline-based compositing).Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0tag:blogger.com,1999:blog-6308304860547349702.post-4840688933743433832010-10-21T12:00:00.000-07:002010-10-31T12:11:00.994-07:00First brainstorm...<div style="text-align: center;">... 
and first post!</div><br /><br />I've been thinking about creating a general virtual environment creator/editor for quite some time now. Since about 2003 I've been working on various incarnations of these ideas: at first on Windows using Visual Studio 6, MFC and DirectX, then using .NET and managed DirectX, followed by a mixture of .NET for the GUI and C++/Ogre3d for the rendering engine. The last version "kind of works", but I do not like the resulting workflow (in another post I will go over what works and what does not). Recently I have been watching presentations and seminars from sceners (mainly Smash/FLT, Navis/ASD) about their workflow, and of course I have been trying out demo editors from other demogroups (e.g. Moppi, Stravaganza, Spontz, Farbrausch, Conspiracy). I decided to start a blog to help me sort out some ideas that I have been juggling around in my brain: I want to restart my coding efforts one last time, but do it properly this time. Meaning: platform-independent, using mature open-source components as much as possible, plugin-based, and with a node-and-timeline-based compositing GUI. Below are my first sketches of such a tool:<div><div><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_SbtZUFx5nrY/TMCOC1jmP9I/AAAAAAAAHNY/QffZMZ0NbpM/s1600/page1.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 290px; height: 400px;" src="http://4.bp.blogspot.com/_SbtZUFx5nrY/TMCOC1jmP9I/AAAAAAAAHNY/QffZMZ0NbpM/s400/page1.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5530576521798238162" /></a><br /><div>On this page you can see the initial ideas I had for the editor. The most important doodles on this page are the three views that the GUI presents: node view, timeline view, and tree view. 
A fourth view, the spline view, was omitted here but will be implemented/ported.</div><div><br /></div><div><a href="http://1.bp.blogspot.com/_SbtZUFx5nrY/TMCQoseI8FI/AAAAAAAAHNg/JW7rgpsmvAU/s1600/page2.jpg"><img src="http://1.bp.blogspot.com/_SbtZUFx5nrY/TMCQoseI8FI/AAAAAAAAHNg/JW7rgpsmvAU/s400/page2.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5530579371217711186" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; width: 290px; height: 400px; " /></a></div><div>The second page shows two conceptual layouts of the tool: the one on the left shows a spline editor, timeline compositing view, timeline controller (play/pause/loop etc.), a 3d preview panel, a parameter editor panel, and an information panel. The right conceptual layout shows where the node compositing view would fit in. </div><div><br /></div><div><a href="http://2.bp.blogspot.com/_SbtZUFx5nrY/TMCRR4wUs0I/AAAAAAAAHNw/9Fou1FfZoHU/s1600/page3.jpg"><img src="http://2.bp.blogspot.com/_SbtZUFx5nrY/TMCRR4wUs0I/AAAAAAAAHNw/9Fou1FfZoHU/s400/page3.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5530580078889841474" style="display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; width: 290px; height: 400px; " /></a></div><div>The last page shows a rough overview of the classes that must be implemented in C++. Some of these classes (Item, Parameter&lt;T&gt;<t>, ParameterManager, and so on) already exist and are more or less usable (minor refactoring and optimization needed). Basically, the current C++ engine component implements timeline-based compositing using modules, which have parameters and parameter groups that can be animated using splines. Compositions can be saved to and loaded from a binary format (this will be extended with an option to export and import XML for debugging purposes). 
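<br /><br />To make the parameter/spline scheme a bit more concrete, here is a minimal sketch of how a keyframed parameter could be sampled per frame. The names (Parameter, setKey, valueAt) are illustrative only, not the engine's real API, and plain linear interpolation stands in for the spline evaluation:

```cpp
#include <iterator>
#include <map>
#include <string>
#include <utility>

// Illustrative sketch only: a parameter holds keyframes ordered by time and
// is sampled at the current timeline position. The real engine would
// evaluate the spline editor's curves instead of interpolating linearly.
template <typename T>
class Parameter {
public:
    explicit Parameter(std::string name) : name_(std::move(name)) {}

    void setKey(float time, T value) { keys_[time] = value; }

    // Sample the parameter at timeline position t (assumes at least one key).
    T valueAt(float t) const {
        auto hi = keys_.lower_bound(t);                       // first key at or after t
        if (hi == keys_.begin()) return hi->second;           // t before the first key
        if (hi == keys_.end()) return std::prev(hi)->second;  // t after the last key
        auto lo = std::prev(hi);
        float f = (t - lo->first) / (hi->first - lo->first);  // blend factor in [0,1]
        return lo->second + (hi->second - lo->second) * f;
    }

private:
    std::string name_;
    std::map<float, T> keys_;  // keyframes, ordered by time
};
```

A module would then own a set of such parameters (grouped via the ParameterManager), and the timeline compositor samples each one every frame.<br /><br />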
So the engine component 'only' needs to be extended with the 'node-based compositing' part, which is what this page is all about. </t></div><div><br /></div><div>In the following days I will post some ramblings about the current tool and its GUI, as well as the GUI framework that I intend to use (Qt) and some ideas on that. Over the next few weeks I will slowly start the porting process. Even though I already have a development environment set up on my Shiny New MacBook Pro and did some compilation tests for the graphics rendering component used (Ogre3d), due to my daytime occupation of being a grad student I have not had any spare time to waste in the past few weeks. Gone are the days when I could do a graduate study with less than 10 hrs of effort... Sometimes I do miss my >60hrs coding stints ;-)<div><br /></div></div></div></div>Sachahttp://www.blogger.com/profile/07374036912738220916noreply@blogger.com0