
3D HTML5/Javascript Game


DaMage

Now it's time for a chat about the game's animation system, which I have started working on.
 
As you would know from modding, NPCs and the like do not use a static model; they have a model that is rigged (sometimes called skinning) to a skeleton, which is in turn animated through rotations and translations to make the model move in a believable way. Each vertex in the model is rigged to one or more bones in the skeleton, with a percentage describing how much that bone affects the movement of that vertex. This is what makes joints and such in high-poly models look good.
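For anyone who hasn't seen skinning code before, the idea boils down to something like this (a rough JavaScript sketch, not my engine's actual code; all names are made up):

```javascript
// Apply a 4x4 matrix (row-major, flat array of 16) to a point {x, y, z}.
function transformPoint(m, p) {
    return {
        x: m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
        y: m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
        z: m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11]
    };
}

// Linear-blend skinning sketch: the final position of a vertex is the
// weighted sum of each influencing bone's transform applied to it.
function skinVertex(vertex, bones) {
    var result = { x: 0, y: 0, z: 0 };
    for (var i = 0; i < vertex.boneIndices.length; i++) {
        var bone = bones[vertex.boneIndices[i]];  // bone.matrix is its current world transform
        var w = vertex.weights[i];                // percentage influence, 0..1
        var p = transformPoint(bone.matrix, vertex.position);
        result.x += w * p.x;
        result.y += w * p.y;
        result.z += w * p.z;
    }
    return result;
}
```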
 
In my system? Well, I have quite a few limitations with the tools I am using. For starters, I have been exporting my models from Blender as .obj, which has zero support for rigged models or skeletons. What it does have, though, is something called 'OBJ groups', which I have used to assign vertices to bones. It has the limitation of only letting me assign a vertex to a single bone. While this would normally be a pain, since I can only use low-poly models anyway this limitation doesn't really affect me, and by careful placement of my skeleton bones I can make a convincing joint regardless.
 
So now I have a way to rig my models to bones, but no way to get the skeleton I created in Blender out and into a format my engine can read (i.e. a JSON file). The solution? Hand-writing the skeleton file. Yes, this is tedious and error-prone, but since most skeletons don't have a ton of bones in them and you don't tend to make many, it is a decent solution for now. Besides, this is exactly how I had to build my skeletons in Oblivion when modding, so I have the process down pat.
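To give an idea of what hand-writing the skeleton involves, the file ends up being something along these lines (field names here are just an example, not my final format):

```json
{
  "name": "PlayerSkeleton",
  "bones": [
    { "name": "Root",     "parent": null,    "position": [0.0,  0.0, 0.0] },
    { "name": "Spine",    "parent": "Root",  "position": [0.0,  1.0, 0.0] },
    { "name": "ArmLeft",  "parent": "Spine", "position": [-0.5, 1.6, 0.0] },
    { "name": "ArmRight", "parent": "Spine", "position": [0.5,  1.6, 0.0] }
  ]
}
```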
 
The next step is to export an animation from Blender and get that into my engine. That is still a ways off yet; I just spent three days doing quite a bit of file parsing for the skeleton, so I don't want to jump straight into file parsing for the animation format. I'll talk about animation in a later post.
 
I want to further expand on and record my technique for rigging my models from the OBJ format. In order to get the vertices into groups, I need to separate them into different objects in Blender and then export, but that leads to duplicated vertices, each one rigged to a different bone. Anyone with 3D modelling experience will notice this would make huge seams at every joint... and you are right.
 
My solution is very brute force, and I had to work the entire thing out in comments before I even started trying to code it. I record all the vertices as normal; afterwards I go through, detect all the duplicates, and find whichever vertex among the duplicates is rigged to the bone with the lowest depth (meaning the fewest parent bones up to the root bone). That is stored as the true bone rigging. After this the duplicates are deleted, and any faces that were pointing to a duplicate vertex now point to the single surviving version of it, which has the correct rigging. In C++ (which my OBJ-to-JSON converter is written in), it is a nightmare of heavily commented loops to do all this.
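The actual converter is C++, but the merge logic boils down to something like this sketch (shown in JavaScript for readability, with made-up names):

```javascript
// Collapse vertices with identical positions into one, keep the bone with the
// smallest depth (fewest parents up to the root), and re-point faces at the survivor.
function mergeDuplicates(vertices, faces, boneDepth) {
    var survivors = {};   // position key -> index of the kept vertex
    var remap = [];       // old vertex index -> new (kept) index
    var merged = [];

    for (var i = 0; i < vertices.length; i++) {
        var v = vertices[i];
        var key = v.x + "," + v.y + "," + v.z;
        if (!(key in survivors)) {
            survivors[key] = merged.length;
            merged.push({ x: v.x, y: v.y, z: v.z, bone: v.bone });
        } else {
            var kept = merged[survivors[key]];
            // keep whichever bone is closer to the root
            if (boneDepth[v.bone] < boneDepth[kept.bone]) kept.bone = v.bone;
        }
        remap[i] = survivors[key];
    }

    // faces (arrays of vertex indices) now point at the merged vertices
    var newFaces = faces.map(function (f) {
        return f.map(function (idx) { return remap[idx]; });
    });
    return { vertices: merged, faces: newFaces };
}
```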
 
--------
 
Let's talk about 3D modelling for a moment. I'm no beginner at creating models, but I am also no artist, so you can expect to see quite a few bad models once I really get developing, and my first person is no exception. To test all this I needed a character rigged up and ready to render, so I was forced to create one. I will say I think I did fairly well on the body, but I really struggled with the head and the hands... I will most likely have to revisit them at some point, as they are ugly at the moment.
 
The final vertex count for my character is also only about 250 or so, which is really nice on the engine, even with the extra computation to do the animations. At the moment I just have a couple of bones hardcoded to move using sin and cos waves, so he just flaps his arms around and twists to let me see whether everything is moving correctly.
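The 'animation' really is nothing fancier than something like this (bone names and amplitudes are made up for illustration):

```javascript
// Drive a couple of bone rotations directly from the elapsed time.
function testAnimation(bones, time) {
    bones["ArmLeft"].rotation.z  =  Math.sin(time) * 0.8;  // flap arms
    bones["ArmRight"].rotation.z = -Math.sin(time) * 0.8;
    bones["Spine"].rotation.y    =  Math.cos(time) * 0.4;  // twist the torso
}
```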
 
On the rendering side of things, though, it really does show that I might need to invest in some sort of soft lighting approach, as flat faces on a character look awful, and I can see why games quickly moved to Phong lighting.
 
--------
 
Lastly I must mention that, completely by accident, I discovered that by multiplying my transformation matrices backwards I could create a single combined stack matrix to multiply my vertices with, rather than going through and multiplying each vertex by every matrix on the stack. Apparently this is a bit of common matrix knowledge I had completely missed/forgotten. Anyway, it doesn't affect FPS at the top level (there is a new bottleneck somewhere that I need to find), but it certainly speeds up that section quite a bit.
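The idea, roughly (a sketch with assumed 4x4 helpers; the exact multiplication order depends on whether you treat vertices as row or column vectors):

```javascript
// Matrices as flat, row-major arrays of 16 numbers.
function identity4() {
    return [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];
}
function multiply4(a, b) {   // returns a * b
    var out = new Array(16);
    for (var r = 0; r < 4; r++) {
        for (var c = 0; c < 4; c++) {
            out[r*4 + c] = a[r*4+0]*b[0*4+c] + a[r*4+1]*b[1*4+c]
                         + a[r*4+2]*b[2*4+c] + a[r*4+3]*b[3*4+c];
        }
    }
    return out;
}

// Collapse the whole transform stack into one matrix once per object/frame,
// so each vertex only needs a single matrix multiply instead of one per stack entry.
function combineStack(stack) {
    var combined = identity4();
    for (var i = 0; i < stack.length; i++) {
        combined = multiply4(combined, stack[i]);
    }
    return combined;
}
```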
 
--------
 
Picture!
My character model being bent by the sin/cos animation I'm using for testing.
post-108-0-20087200-1382970495_thumb.png
 
 
EDIT: There is a single column of pixels on the far right of my canvas that my 3D scene is not being drawn onto. I don't know when it started, but I noticed it tonight and it's really starting to annoy me. I will have to go into detective mode tomorrow... most likely a greater-than-or-equal-to mistake.

  • 3 weeks later...

This will just be a short update.

 

Remember how in the last post I was complaining about how the 'flat shading' of the faces looks horrid? Well, tonight I finished working on my new lighting model. Previously I would calculate the normal of a face and use that to get the lighting, then by finding the distance between each vertex and each light I would record how 'bright' each vertex was. In order to do smooth shading, instead of calculating the normal for a face, I now use a stored normal for each vertex.

 

Using this, I do the distance-from-lights and diffuse lighting check (diffuse is where you compare the normal to the light location and see how directly a vertex/face is pointing towards the light source) on each vertex and store how bright it is. Despite having to move my lighting code much earlier in my pipeline, it doesn't seem to have taken too big a whack to the FPS.
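Roughly, the per-vertex lighting works out to something like this sketch (the falloff formula here is made up, not my engine's exact numbers):

```javascript
// Brightness of one vertex: ambient plus, for each light, a diffuse term
// (dot product of the stored vertex normal and the direction to the light)
// attenuated by distance.
function vertexBrightness(vertex, lights, ambient) {
    var brightness = ambient;
    for (var i = 0; i < lights.length; i++) {
        var dx = lights[i].x - vertex.x,
            dy = lights[i].y - vertex.y,
            dz = lights[i].z - vertex.z;
        var dist = Math.sqrt(dx*dx + dy*dy + dz*dz);
        var lx = dx / dist, ly = dy / dist, lz = dz / dist;   // direction to light
        var diffuse = Math.max(0, vertex.nx*lx + vertex.ny*ly + vertex.nz*lz);
        brightness += diffuse * lights[i].intensity / (1 + dist * dist * 0.05); // made-up falloff
    }
    return Math.min(1, brightness);
}
```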

 

I did a quick side by side of the difference between the flat shading and the smooth shading code. The light source is the camera in this picture.

post-108-0-15583300-1384605908_thumb.jpg

 

 

Of course, objects can still have their faces set to be flat; that's all done with smoothing groups in Blender and such.

 

--------------------------------------------

So this project is now going to become my summer job for the next three months, after which I should hopefully have at least something playable from it. Since what I am doing now is becoming much more about movement rather than pictures, I intend to start up a development vlog on YouTube, which could also get some more eyes on it.

 

Despite that, I’ll continue to post here with smaller stuff like what I have put up today.


Cool Mage. Question ... When you talk about the "diffuse lighting check", to see how directly a vertex is pointing toward a light... how are vertices oriented? I understand that the normal would be what is pointing toward the light but, how is the normal of a vertex placed in the first place? I think I have some idea, but.... I get confuselated just trying to word it out.


So you understand how a face has a normal, in that it points outwards in the direction that the face is .... facing. In order to find a vertex normal (well, at least one that can be used for smoothing) you take a vertex, then make an average normal by combining the normals of every face that vertex is a part of.


Yup ...that's basically what I was thinking, but...with the vert normal being placed at the center of the convergence of the edges which connect with all the other verts.. keyword (that I couldn't think of before) being...convergence.

 

 I figure that a face normal is the "center" determined by the verts defining that face?

EDIT: Ah...so...by averaging the faces...it's easier than what I was thinking with edges. But a face requires 3 or 4 verts to define it..? Only 2 of which must be connected to the origin (the vert in question) for a single face ... so ... 2 on the X axis...2 on the Y ...and so on .. the vert normal would then be placed at the convergence of the center of the faces / edges in all 3 axes?
 

Edited by donnato

 

Well actually, a face is 3 vertices (a vertex is a point with a location of 3 values, XYZ), no matter what (except for some engines that use quads... but those are weird; it's easier to break quads into two triangles). So yes, you find a face's normal by doing a cross product using the three vertices that make it up. In a program like Blender, the face normal is shown pointing out from the center of the face. Since you then know the normal for each face, you can figure out the normal for each vertex. Unlike a vertex, a normal does not have a 'location'; it is a normalised vector (normalised means it has a length of 1) describing which direction it points.

 

3D models do not actually store the face normals, since they are really simple to calculate. They do, however, store the vertex normals, since those are much harder to derive (a vertex can be part of any number of faces).
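If it helps, the whole thing boils down to something like this sketch (illustrative JavaScript, not my engine's exact code):

```javascript
// Face normal from the cross product of two edges, then vertex normals as the
// normalised sum of the normals of every face the vertex is part of.
function computeVertexNormals(vertices, faces) {
    var vnormals = vertices.map(function () { return { x: 0, y: 0, z: 0 }; });
    faces.forEach(function (f) {                 // f = [i0, i1, i2]
        var a = vertices[f[0]], b = vertices[f[1]], c = vertices[f[2]];
        var e1 = { x: b.x - a.x, y: b.y - a.y, z: b.z - a.z };
        var e2 = { x: c.x - a.x, y: c.y - a.y, z: c.z - a.z };
        var n = {                                 // cross product e1 x e2
            x: e1.y * e2.z - e1.z * e2.y,
            y: e1.z * e2.x - e1.x * e2.z,
            z: e1.x * e2.y - e1.y * e2.x
        };
        f.forEach(function (i) {                  // accumulate onto each corner vertex
            vnormals[i].x += n.x; vnormals[i].y += n.y; vnormals[i].z += n.z;
        });
    });
    return vnormals.map(function (n) {            // normalise to unit length
        var len = Math.sqrt(n.x*n.x + n.y*n.y + n.z*n.z) || 1;
        return { x: n.x / len, y: n.y / len, z: n.z / len };
    });
}
```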


I chewed on this for a bit and...I fully understand now how the vertex normal is (oops, brain fart) .. extrapolated. ... Jeez dude ... no wonder there aren't many people making engines. So...the face normal is generated 90 degrees to the plane of the face ... at the convergence of the angles of the defining verts? Cool.


You mention quads and weird in one sentence - but for anything organic, quads are the choice you would want to go for. The extra vertex compared to triangles makes moving and bending the mesh a lot smoother, including the attached textures.

 

I see the point for an engine to reduce the number of vertices to the absolute minimum - so using triangles for the majority of architecture and static stuff is the way to go. But since you introduced your 'person', don't you think you might need to consider the usage of quads as well?


Now, quads are a nice idea, but in rendering you can only render a flat face; I'd need diagrams to explain why.
post-108-0-67059100-1384784891_thumb.png
 
So here is a quad in Blender. I outlined the quad in red, and moved the opposite corners of the quad up in the air to form a quad that is not flat. Blender does its best to figure out how it should break up that quad for rendering (which is the blue line), hence you end up with two triangles instead of the quad you originally had... but the Blender renderer is deciding how the quad is split, because in that case there are two valid ways to divide the quad into triangles.
 
Now, a renderer could interpret that as a 'curve' (true curves don't really exist in computers, since values like pi can't be represented exactly; curves are just lines with lots of segments), but that's not something I want to program, though many non-game engines do do such things.
 
That's not to say quads aren't used in my models; it's just that I break them down into triangles to make sure I get the correct edge for that face. Some renderers (and I'm talking OpenGL- and DirectX-level graphics rendering here) have a scanline for doing both triangles and flat quads, as it is cheaper to have a dedicated quad drawer than to break them into triangles, but by doing so you just amplify the complexity. It's much easier to make sure your models just use triangles and take the slight speed hit.
 
 
 
I'll just run through quickly how quads differ from triangles at the 2D level (the above stuff applies at the 3D level). So once you have your points in 2D screen co-ordinates, you have a shape. It can have upwards of something like 6 points on it, depending on how it clips with the edge of the screen. You then take this shape and break it down into triangles (I use a technique called 'smart triangles', which I realise now is overkill as my shapes are all convex... I must look into that), and then for each triangle you find the left and right edges; these are stored and then you go down the left and right edges filling in all the pixels between them.
 
The main difference with quads is the finding of the left and right edges. With triangles it's super easy: you have a top point, then use simple checks to find which edge is left and which is right. Quads are harder, as with more points there are more combinations; the left edge could be A - B - C and the right edge A - D, or any combination in between.
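For the curious, a scanline fill for the simple flat-bottom triangle case looks roughly like this (a sketch only; the real code splits general triangles into halves and interpolates depth/colour as well):

```javascript
// Fill a triangle whose bottom edge is horizontal: 'top' is the apex,
// 'left' and 'right' are the two bottom corners (same y). Walk down the left
// and right edges and fill every pixel between them on each scanline.
function fillFlatBottomTriangle(top, left, right, putPixel) {
    var slopeL = (left.x  - top.x) / (left.y  - top.y);   // dx per scanline on the left edge
    var slopeR = (right.x - top.x) / (right.y - top.y);   // dx per scanline on the right edge
    var xl = top.x, xr = top.x;
    for (var y = Math.round(top.y); y <= Math.round(left.y); y++) {
        for (var x = Math.round(xl); x <= Math.round(xr); x++) {
            putPixel(x, y);
        }
        xl += slopeL;
        xr += slopeR;
    }
}
```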
 
 
---------------------------------------------
 
Bonus Picture: (terrible, terrible land model)
post-108-0-82682700-1384785413_thumb.png
 

I am now at the point in the project where I need STRUCTURE and large data stores, mainly because I am rendering quite a lot of things (objects, lights, camera, animations etc.) and it's turning into a mess of gibberish numbers. This is fine for testing, but it will not scale, and even having 5 objects in the scene is a nightmare to position everything correctly... no, what I need is an editor... a Construction Kit, if you will. The crux point for this is how I want to implement collision; even the basic collision I want to implement is too complex for me to create manually, so if I need to build a program to design that in, why not expand it to cover everything now.

 

I've known this for a while, so planning for it has taken place in my head for ages, and the editor will not be JavaScript based... why is that? Mainly because an editor often has to render much more than a game scene, and lag during development helps no-one. That's why the editor will be a C++ program that will produce a game file that the JavaScript engine understands, a bit like Oblivion's plugin/master files. So C++, that means I'm free of my shackles! OpenGL is available and I have an engine built already that works with OpenGL... yes and no. I like OpenGL, and on a serious C++ game project it is what I would use, but for the purposes of an editor, with lots of mouse interaction on the 3D scene itself, I just don't know how to do it with OpenGL. I could learn, but where is the fun in that? Let's do this the Mage way: let's convert the JavaScript renderer into C++ and have some fun.

 

The last time I had to build a 3D renderer from scratch was for an introductory graphics rendering assignment at university. At the time I was fresh both to the theory and to C++ as a language, so you can imagine that the end result was pretty poor and suffered from all the problems I encountered during the JavaScript build. It also struggled to render 4 of those apples in a black scene... yeah... it was baaaaaad.

 

This time I have quite a lot going for me. I have a working implementation in JavaScript, which is similar enough to C++ that I can copy large amounts of logic across with little conversion. C++ runs at a significant speed boost compared to JavaScript in your web browser, meaning that if I can make it run in JS, it will run in C++. I also have over a year's experience in C++ now, meaning the memory problems that slowed my program before will be avoided. This way I can avoid having to learn the OpenGL-specific way of selecting objects, drawing 2D outlines/wireframes over the top of my scenes etc., and just implement them my own way. It also means the editor render will look really similar to the JS render, which is important when designing.

 

This editor creation and integration with the JS game engine will most likely take up a large chunk of time now, so I wouldn't expect too much for the next few weeks. After that, game engine work can begin and it starts turning into something that can be played.

 

UNTIL THEN, feel free to ask any questions related to anything to do with rendering, game engines or anything related to what I’ve said. This stuff is my jam, I really enjoy thinking and talking about it.


  • 2 weeks later...

Ugh, code copying is very dull; it's just doing nothing but debugging all day with no inventive side to play with. But it does have the side effect that I am going over the entire JavaScript rendering process again, and that does turn up interesting things.

 

Today I was copying over code that had to do with the z-buffer, 3D clipping and culling, and lighting, and I found something really interesting. I was calculating all the lighting values for every vertex in the scene, including all those that were not on screen, like those behind me. This is very bad, as it is a waste of CPU time to do this on values that aren't going to be used. It all stemmed from a logic choice to put both the 3D culling and 3D clipping code together in a function outside the massive render loop.

 

I had originally broken off this section of code as it is very long and very complicated; it is all about looking at and modifying vertices in 3D space against various planes, called the viewing frustum. I'm not going to go into it, only because I have a basic understanding and the amount of maths in it does my head in. The important thing, though, is that the 3D culling is a fast method for excluding faces that are not on the screen, something that should happen before lighting, but the 3D clipping requires vertices to have all their world information, which includes lighting information.

 

The solution is simple: break them in two and do the lighting in between, which is what I did, and with 15,000 vertices and 3 light sources I gained about 5 FPS, which, in my search for more FPS, is really nice.

 

Related to the stuff I was copying, I also had to muck around with the scanline function, which if you remember is the single slowest piece of code in my engine. I found a piece of incorrect maths in it: I had several if statements to deal with an annoying divide-by-zero error if you only drew a single pixel, when in fact that zero should never happen, as the maths was supposed to have a +1 on the end. While it was only in setup code and not the scanline loop, fixing it gained me a 1 FPS boost and also fixed a graphical error I had.

 

 

Anyway, that's all for now. The editor is coming along slowly, and after some initial problems to do with making it multithreaded, the code conversion is working well. Also in progress in my head is a cheap collision model that will most likely get a discussion later on.


Well, this is a problem. I got the editor to a point where it can render objects without textures... and it sucks. After going through and optimising a bit, the best I can get is 45 FPS, compared to the 120 FPS the same scene gets in the JavaScript version. For the first time in a while I am stumped... this was not meant to happen, since C++ is much faster than JavaScript... I'm not quite sure where to go from here now. Do I rewrite large sections of code to try and make it faster, do I turn to an OpenGL implementation to take the drawing away from my control, or do I give up on a C++ editor completely...

 

This is really annoying.

 

--------------------------------------------

After a day of considering my options, and nine holes of golf, I have decided to move this editor to OpenGL. Hopefully I can match the rendering style of the web engine and figure out how to do the click functionality with it, but I know for a fact it will render anything I throw at it, and if I just tried to optimise my code there is a good chance I'd waste a week and get nowhere.

 

 

So yay, now I get to chuck away a very large segment of the work I have been doing  :down:


No...you step away... sleep on it ... distance yourself for a bit... you're too close ... maybe to a breakthrough... Back away and let it come to you.

That's what today was. I discovered this last night shortly after I wrote the last update, and I spent a few hours this morning benchmarking and trying to find where the slowdowns were; basically every single step of the rendering pipeline was causing major slowdowns. It would be a huge job to overhaul it all to get it faster, and at the end there is a good chance I would not have something usable. I figure I'm only throwing away a week's worth of work if I get out now, rather than two weeks if optimising didn't work. Basically I made an assumption that code from JavaScript would automatically run faster in C++, but it turns out that is not the case, and I simply don't have the C++ experience to optimise it yet.

 

Previously I have worked with C++, SDL2 and OpenGL, so I have the code ready to do that. I know it works well and I'll just have to make do with it. I had my reasons to stay away from OpenGL for this, but speed easily outweighs them.


After much stuffing around getting the code to work (how come copy-pasting code never seems to work the first time?) I managed to get the scene rendering in a similar way to where I got up to in the previous attempt, only of course with OpenGL doing all the drawing it's running at about 5500 FPS (yes, that's 5.5K). Hopefully with some additional work over the next few days I'll have the rendering doing exactly what I want, and then I'll finally be able to get onto the fun stuff. Looks like OpenGL is the way to go for the editor.


Time for me to muse again.

 

I have rendering down pat now; the scene renders quickly and stuff appears more or less similar to how the web engine works, though there are some quirks. First off, the FOV value gives a different result to the web version, despite making the same perspective matrix... it's not a big deal right now, but one of them is wrong, and I'm guessing it's my makeshift code in the web engine. Another interesting quirk is that the editor version is much brighter, and the colours don't blend together as well with the lighting... but it's close enough for what the editor has to do... if I remember correctly, the Oblivion CS never quite looked the same as in-game either.

 

I'm now at the part where I need controls, and this is much more complex than I first thought... moving objects in a 3D space with only a 2D viewpoint is very tricky mathematically. I was able to quickly create a movement system for the camera... which only has one bit of fudgy maths in it (basically I guessed a number and it looked good first try). Rotating and scaling an object within the scene is also pretty easy... but the hard bit? Figuring out which object in the scene you click on, and then how to make it follow the mouse along the XZ plane when dragging it. I am basically basing the controls off those of the Oblivion CS, mainly because I am just so used to those and they work better for level design than those of a 3D program.

 

Selection is the first issue to tackle, and I was surprised to find there is actually no built-in store in OpenGL for which object renders where; the only way to do it is a hacky workaround... I guess this is not a common problem? Whatever. Basically you re-render the scene, but with a few important differences: instead of the whole screen, you only render the one pixel you clicked on, and every object in the scene is rendered as a solid, unique colour... whatever colour the rendered pixel turns out to be is the ID of the object you clicked on. This render all happens behind the scenes and is never put to the screen, so you never see it. Great, now I can select objects. Seriously, this 'workaround' technique is actually listed in the OpenGL wiki tutorial pages. Why there is not a simple buffer for storing object IDs, I don't know.
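The guts of the trick is just packing the object ID into a colour and back again; something like this (a sketch only, actual GL calls omitted):

```javascript
// Encode an object ID as an RGB colour for the off-screen picking pass,
// and decode the pixel read back under the mouse into an ID again.
function idToColour(id) {
    return { r: (id >> 16) & 0xFF, g: (id >> 8) & 0xFF, b: id & 0xFF };
}
function colourToId(r, g, b) {
    return (r << 16) | (g << 8) | b;
}

// Pseudo-flow of the picking pass:
//   for each object: draw it flat-coloured with idToColour(object.id), no lighting or textures
//   read back the single pixel at (mouseX, mouseY) from the off-screen buffer
//   selected = colourToId(pixel.r, pixel.g, pixel.b)
```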

 

But moving an object on the XZ plane? Oh dear... that's complex. Because of the perspective view, the movement of the object corresponding to one pixel close to the camera is much less than if the object was far away... it is completely non-linear... all due to perspective. This difference, however, is also affected by the upward/downward rotation of the camera, as this affects how much of the plane you see. What a pain. This is not a trivial problem, but it is a very important control: when you are moving an object it has to go where the mouse is... else it doesn't feel right...

 

I have spent a day or so doing some serious thinking and some web research, and the best solution I can come up with is as follows. I know where the camera is and I know what object I have selected, so if I create a 3D line from the camera point, going through the pixel I clicked on, I can use it to find the intersection point on the XZ plane that the object sits on. Using that intersection point, I measure how far behind it the object is, so that I now know the offset that object should be from my 3D line. As I move the mouse I draw a new line each frame and find the intersection point of this new line on the same plane; applying the offset, I can then reposition the object.
 

At the moment this is still theory in my head, but the idea seems sound, and really much of this post was just me getting it down in writing so that I could make logical code from it.
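For the record, the ray-vs-XZ-plane part is the easy bit; something like this (a sketch of the idea, not tested code; the ray origin/direction come from unprojecting the mouse position, which is not shown):

```javascript
// Intersect a ray with the horizontal plane y = planeY.
function intersectXZPlane(rayOrigin, rayDir, planeY) {
    if (Math.abs(rayDir.y) < 1e-6) return null;   // ray is parallel to the plane
    var t = (planeY - rayOrigin.y) / rayDir.y;
    if (t < 0) return null;                       // plane is behind the camera
    return {
        x: rayOrigin.x + rayDir.x * t,
        y: planeY,
        z: rayOrigin.z + rayDir.z * t
    };
}
// on mouse-down: offset = objectPosition - intersection point
// on mouse-move: objectPosition = new intersection point + offset
```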

 

Anyway, here is a quick pic... it's not much, but it's the same level model from the engine, having been selected in the window.

post-108-0-91958800-1386854000_thumb.png

 

------------------------------------------------------------------------

 

Moving forward, well, now I'm getting into the guts of the editor. I want to create a way to transfer objects into the scene with drag and drop from another window, then I get to figure out how level design will work... but that's a whole other post that will tie in with the collision structure.

 


I found what I had done wrong to put my perspectives out of line... basically it was an error caused by me not following my university notes correctly. When you convert the perspective coordinates to window coordinates, you need to use a scaling matrix that divides them by 2; I forgot the scaling matrix and it basically caused my screen to be 'zoomed in'.
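In other words, the missing step was roughly this (a sketch of the usual normalised-device-coordinates-to-window mapping):

```javascript
// NDC runs from -1..1 on both axes, so the window transform has to halve the
// range (the scale I forgot) before offsetting into pixel coordinates.
function ndcToWindow(ndcX, ndcY, width, height) {
    return {
        x: (ndcX + 1) * 0.5 * width,
        y: (1 - ndcY) * 0.5 * height   // flip Y so 0 is the top of the canvas
    };
}
```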

 

 

I also appear to have been overthinking what I wanted in the previous post... I played around with the CS and noticed that objects do not track directly with the mouse; I must be thinking of a Blender thing. It would appear that doing a simple ratio (like with the camera), with a check against the z-buffer to see how zoomed out you are, should be enough to make the movement usable. The more zoomed out, the further it moves.

 

EDIT: Upon implementing it, yes, that is indeed how they did it. Ignore the previous post; complex maths avoided in favour of a more hacky, but easy to understand, way of moving objects.


A quick update before I move onto today's topic. Last night I finished implementing the important controls for the render window: I can move/rotate the camera and also click on and move/rotate/scale any object in the scene. I also made it so multiple objects can be in the scene at once so that I could test these features. It needs fine tuning and some more advanced controls, but for now it is good enough for me to navigate and modify the scene.

 

----------------------------------------------------------------------------------------------------------

 

Hey, let's finally move away from all this rendering stuff and talk about some game design... most specifically, level design and collision.

 

In 3D games I know of a few ways that level design is implemented, that is, how 3D models make up the actual level that you can see. This has a pretty huge bearing on what kind of collision you program as well, and even later on how you implement AI so that characters can navigate the level. I'm going to outline each one.

 

3D Model:

You see this in big-budget pieces, such as FPS games, where the level is made up of a number of very large static models. This allows the developers to restrict exactly how many vertices are on the screen, as the entire level is made and put together in Blender or 3DS Max. It also means the level can look really nice and flow without jarring bits. This technique also often goes along with a complicated physics model, and the collision with the level is done with a face-by-face check. Unless you are using a premade engine like Unreal or Unity, this is well out of an indie's reach.

 

Tilesets:

As Bethesda modders, we know this technique pretty well. A level is made up of a number of 3D models that are mashed together in the editor to form levels. This technique allows designers to create many levels very easily, at the cost that levels can end up looking a bit samey. I feel this technique is really great for any sort of open-world or adventure game, as it means you can quickly make your levels once you have the initial tileset complete. When it comes to collision this way is also more flexible, but most of the time you want to do the proper face-by-face checking, as your tilesets can have complex faces.

 

Vector Walls:

This technique goes way back to the nineties, and the best example I know of is DOOM. You define all the walls of your level using lines: space inside the walls is floor, space outside is unreachable. Then for each floor segment you can define the floor height and the ceiling height, allowing for steps and more dynamic room heights. You apply textures to the walls, floors and ceilings and you have your rooms. Once complete, you package the whole thing into a 3D model for rendering, which means it is much closer to the 3D model technique than a tileset. The collision model for this is dead simple: with a bit of computer-generated triangle creation, you basically get a triangle-check collision model which is stupidly fast.
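To give a feel for it, a vector-wall level might boil down to data something like this (all names and numbers are made up, not my actual format):

```javascript
// Walls as 2D line segments, sectors carrying floor/ceiling heights.
var level = {
    sectors: [
        { floorHeight: 0.0, ceilingHeight: 3.0, floorTex: "stone", ceilingTex: "wood" },
        { floorHeight: 0.5, ceilingHeight: 3.0, floorTex: "stone", ceilingTex: "wood" }
    ],
    walls: [
        // each wall is a 2D segment; 'sector' says which floor area it borders
        { x1: 0, z1: 0, x2: 5, z2: 0, sector: 0, texture: "brick" },
        { x1: 5, z1: 0, x2: 5, z2: 4, sector: 0, texture: "brick" },
        { x1: 5, z1: 4, x2: 9, z2: 4, sector: 1, texture: "brick" }  // step up into sector 1
    ]
};
// 2D wall collision then reduces to point/segment tests against level.walls.
```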

 

Bitmap Levels:

This technique harks back to the 2D/2.5D days of gaming, but it is so simple that it is still in use today. The dead giveaway for this model is straight lines and only 90-degree corners, but the collision is even easier than Vector Walls. The level is built from a bitmap image, where each pixel represents a piece of the level, with the colour of the pixel having significance, such as which piece of a tileset is used there (but that varies). As I said, it is still used by indie games today due to its simplicity and the fact that you could use MS Paint to create a level in very little time. Its main drawback is that it sucks at anything not dungeon-like.

 

Heightmaps:

This is not a level design technique by itself, but rather it is used in combination with the techniques above in order to create outside terrain. The way I've always implemented heightmaps is that you take a bitmap where each pixel represents a vertex of the terrain: its position in the bitmap gives you the XZ values, and its colour gives you the Y (height) value. You convert this to a 3D model and also use it for collision checking, as it is a simple check against a single triangle.
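As a rough sketch, the heightmap conversion is little more than this (assuming the bitmap has already been read into a 2D array of brightness values):

```javascript
// Pixel (i, j) gives the X and Z position; its brightness gives the Y (height).
function heightmapToVertices(pixels, cellSize, heightScale) {
    var verts = [];
    for (var j = 0; j < pixels.length; j++) {
        for (var i = 0; i < pixels[j].length; i++) {
            verts.push({
                x: i * cellSize,
                y: pixels[j][i] * heightScale,   // brightness -> height
                z: j * cellSize
            });
        }
    }
    return verts;   // faces come from joining neighbouring grid vertices into triangles
}
```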

 

So what am I looking to use for my engine? Well, I think Vector Walls are the best way to go, simply because it means I can make interesting-looking levels but also have fast collision (which is important with the web side of things). Since I also want to have outside areas, I will be using heightmaps for those, combined with vector walls for the collision.

 

The big downside to this technique is that it requires me to build a way in my editor to create these, most likely as a toggle on the current window to switch between the wall/collision editor and the real render... or even as a second window, where changing the walls updates right away in the real render.


I wondered about your XZ movement references ... I'm used to the Z being vertical, X+Y being planar ...but I have used CAD programs where Y is the vertical, so ... you just clarified, for me, your reference. Thank you.  Heightmaps... Very interesting also. I agree about your choice of using "Vector Walls" to build your levels. Based on your explanation... Good luck on that last paragraph... :rofl: I understand though...somehow.

Great add, and I'm happy to see this progress. :pints:


XZY (Y up) and XYZ (Z up) are really just personal preference; you just need to be consistent with whatever you use. OpenGL uses XZY, where at the default camera position you look down negative Z, X goes from left to right and Y is straight up... I believe in Minecraft X and Z are reversed, which is really weird.

 

That last bit will make much more sense once I have some of it implemented and I can show a picture.

