

Everything posted by DaMage

  1. So I have discovered that an algorithm called Smart Triangles, which I use for converting a concave polygon into triangles, does not work the way I thought it did, and the code I was using for it only works 70% of the time. For simple polygons of 5 points or fewer it is fine, but once I get 10s of points forming a polygon, it freaks out and creates incorrect triangles.....oh dear. After reviewing my materials I found where the error was....but boy is it going to be a pain to fix. Basically I was cheating my way through it: when I found a triangle that didn't fit, I would simply skip it for a bit until I had done more triangles, in the hope that later on it would fit.....which works...but not for polygons with a ton of points. Again, this is just a bit of really old code that I had accepted as working, when it really didn't.
What you are supposed to do, when you find a triangle that doesn't fit (which means there are other points of the polygon within the triangle), is divide the polygon into two smaller polygons, where the dividing line is drawn between your starting point and the leftmost of the points that are within the triangle. You then apply the algorithm to both of these polygons, and if you have done programming before you'll notice this would be done through a recursive function.....which I hate using. (There's a sketch of the core containment test below.)
A recursive function is a function that calls itself; eventually it hits some point in its recursion and finishes, going back up the chain completing each function that was called. Think of a stack of plates, where every plate put on the stack is an instance of the function calling itself; once a function finishes, it gets taken off the stack. That way once the function at the top finishes, it allows the one under it to finish, and so forth until you complete them all. It's a part of programming that I always struggle with, as it is really hard to keep track of in your head what is going on and what values you are passing up and down the functions. Just thought I would share this bit, more just out of annoyance at having a problem like this come up.
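For anyone curious, here is a minimal JavaScript sketch of the standard ear-clipping loop, a close cousin of the approach described above — the key piece is the same containment test: a candidate triangle is rejected if any other polygon point lies inside it. All names are mine, and it assumes a simple polygon wound counter-clockwise:

```javascript
// Minimal ear clipping for a simple counter-clockwise polygon.
// pts: array of {x, y}. Returns triangles as triples of indices into pts.
function cross(o, a, b) {
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}
function pointInTri(p, a, b, c) {
  // works because (a, b, c) is counter-clockwise here
  return cross(a, b, p) >= 0 && cross(b, c, p) >= 0 && cross(c, a, p) >= 0;
}
function earClip(pts) {
  const idx = pts.map((_, i) => i);
  const tris = [];
  while (idx.length > 3) {
    let clipped = false;
    for (let i = 0; i < idx.length; i++) {
      const ia = idx[(i + idx.length - 1) % idx.length];
      const ib = idx[i];
      const ic = idx[(i + 1) % idx.length];
      const a = pts[ia], b = pts[ib], c = pts[ic];
      if (cross(a, b, c) <= 0) continue; // reflex corner: not an ear
      // the triangle "doesn't fit" if any other polygon point is inside it
      const blocked = idx.some(j =>
        j !== ia && j !== ib && j !== ic && pointInTri(pts[j], a, b, c));
      if (blocked) continue;
      tris.push([ia, ib, ic]); // clip the ear off
      idx.splice(i, 1);
      clipped = true;
      break;
    }
    if (!clipped) break; // degenerate input; bail rather than loop forever
  }
  tris.push([idx[0], idx[1], idx[2]]);
  return tris;
}
```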
  2. Wow, a video! I was testing out the screen capture software I need, so I made a quick demo of how the wall editor works in terms of creating a 3D model....it's still very early days with it though. http://www.youtube.com/watch?v=4ekoV1k06ig
  3. XZY (y up) and XYZ (z up) are really just personal preference; you just need to be consistent with whatever you use. OpenGL uses XZY, where at the default camera position you look down negative Z, x goes from left to right and y is straight up.....I believe in Minecraft X and Z are reversed, which is really weird. That last bit will make much more sense once I have some of it implemented and I can show a picture.
  4. A quick update before I move onto today's topic. Last night I finished implementing the important controls for the render window: I can move/rotate the camera and also click on and move/rotate/scale any object in the scene. I also made it so multiple objects can be in the scene at once so that I could test these features. It needs fine tuning and some more advanced controls, but for now it is good enough for me to navigate and modify the scene.
----------------------------------------------------------------------------------------------------------
Hey, let's finally move away from all this rendering stuff and talk about some game design….most specifically, level design and collision. In 3D games I know of a few ways that level design is implemented — that is, how 3D models make up the actual level that you can see. This has a pretty huge bearing on what kind of collision you program as well, and even later on how you implement AI so that characters can navigate the level. I'm going to outline each one.
3D Model: You see this in big budget pieces, such as FPS games, where the level is made up of a number of very large static models. This allows the developers to restrict exactly how many vertices are on the screen, as the entire level is made and put together in Blender or 3DS Max. It also means the level can look really nice and flow without jarring bits. This technique often goes along with a complicated physics model, and the collision with the level is done as a face-by-face check. Unless you are using a premade engine like Unreal or Unity, this is well out of an indie's reach.
Tilesets: As Bethesda modders, we know this technique pretty well. A level is made up of a number of 3D models that are mashed together in an editor to form levels. This technique allows designers to create many levels very easily, at the cost that levels can end up looking a bit samey. I feel this technique is really great for any sort of open world or adventure game, as it means you can quickly make your levels once you have the initial tileset complete. Collision done this way is also more flexible, but most of the time you want to do the proper face-by-face checking, as your tilesets can have complex faces.
Vector Walls: This technique goes way back to the nineties, and the best example I know of is DOOM. You define all the walls of your level using lines; space inside the walls is floor, space outside is unreachable. Then for each floor segment you can define the floor height and the ceiling height, allowing for steps and more dynamic room heights. You apply textures to the walls, floors and ceilings and you have your rooms. Once complete, you package the whole thing into a 3D model for rendering, which means it is much closer to the 3D model technique than a tileset. The collision model for this is dead simple: with a bit of computer-generated triangle creation, you basically get a triangle-check collision model which is stupidly fast.
Bitmap Levels: This technique harks back to the 2D/2.5D days of gaming, but it is so simple that it is still in use today. The dead giveaway for this model is straight lines and only corners of 90 degrees, but the collision is even easier than Vector Walls. The level is built from a bitmap image, where each pixel represents a piece of the level, with the colour of the pixel having significance, such as which piece of a tileset is used there (but that varies). As I said, it is still used by indie games today due to its simplicity, and the fact you could use MS Paint to create a level in very little time. Its main drawback is that it sucks at anything not dungeon-like.
Heightmaps: This is not a level design technique by itself; rather it is used in combination with the techniques above in order to create outside terrain. The way I've always implemented heightmaps is that you take a bitmap where each pixel represents a vertex of the terrain: its position in the bitmap gives you the XZ value, and the colour gives you the Y (height) value. You convert this to a 3D model and also use it for collision checking, as it is a simple check against a single triangle. (There's a sketch of this after the post.)
So what am I looking to use for my engine? Well, I think Vector Walls is the best way to go, simply because it means I can make interesting looking levels, but also have fast collision (which is important with the web side of things). Since I want to also have outside areas, I will be using heightmaps for those, combined with vector walls for the collision. The big downside to this technique is that it requires me to build a way in my editor to create these, most likely as a toggle on the current window, to switch between the wall/collision editor and the real render….or even as a second window where changing the walls updates the real render right away.
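Since heightmap collision came up above, here's a rough JavaScript sketch of the idea: bitmap position gives XZ, brightness gives Y, and the ground height under a point is a check against a single triangle of the grid cell. Function and field names are mine, and it assumes the query point lies strictly inside the grid:

```javascript
// pixels: 2D array of 0-255 brightness values from the heightmap bitmap.
// Each pixel becomes a terrain vertex: bitmap position -> XZ, brightness -> Y.
function buildTerrainVerts(pixels, scale, heightScale) {
  const verts = [];
  for (let z = 0; z < pixels.length; z++)
    for (let x = 0; x < pixels[z].length; x++)
      verts.push({ x: x * scale, y: pixels[z][x] * heightScale, z: z * scale });
  return verts;
}

// Ground height under (x, z): find the grid cell, pick which of its two
// triangles the point is in, then interpolate that triangle's corners.
function groundHeight(pixels, scale, heightScale, x, z) {
  const gx = Math.floor(x / scale), gz = Math.floor(z / scale);
  const fx = x / scale - gx, fz = z / scale - gz; // 0..1 inside the cell
  const h = (ix, iz) => pixels[iz][ix] * heightScale;
  if (fx + fz <= 1) // lower-left triangle of the cell
    return h(gx, gz) + fx * (h(gx + 1, gz) - h(gx, gz))
                     + fz * (h(gx, gz + 1) - h(gx, gz));
  // upper-right triangle
  return h(gx + 1, gz + 1) + (1 - fx) * (h(gx, gz + 1) - h(gx + 1, gz + 1))
                           + (1 - fz) * (h(gx + 1, gz) - h(gx + 1, gz + 1));
}
```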
  5. I found what I had done wrong to put my perspectives out of line...basically it was an error caused by me not following my university notes correctly. When you convert the perspective coordinates to window coordinates, you need to use a scaling matrix that divides by 2; I forgot the scaling matrix and it basically caused my screen to be 'zoomed in'. I also appear to have been overthinking what I wanted in the previous post.....I played around with the CS and noticed that objects do not track directly with the mouse; I must be thinking of a Blender thing. It would appear that doing a simple ratio (like with the camera), with a check against the ZBuffer to see how zoomed out you are, should be enough to make the movement usable. The more zoomed out, the further it moves. EDIT: Upon implementing, yes, that is indeed how they did it. Ignore the previous post; complex math avoided in favour of a more hacky, but easy to understand, way of moving objects.
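The window-coordinate conversion being described amounts to this (a hedged sketch — the engine does it with a matrix, but the arithmetic is the same scale-by-half-and-offset):

```javascript
// After the perspective divide, coordinates sit in -1..1; scaling by half
// the screen size (the forgotten divide-by-2) and offsetting gives pixels.
function ndcToWindow(xNdc, yNdc, width, height) {
  return {
    x: (xNdc + 1) * 0.5 * width,
    y: (1 - yNdc) * 0.5 * height, // flipped: screen Y grows downward
  };
}
```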
  6. Time for me to muse again. I have rendering down pat now; the scene renders quickly and stuff appears more or less similar to how the web engine works, though there are some quirks. First off, the FOV value gives a different result to the web version, despite making the same perspective matrix…..it's not a big deal right now, but one of them is wrong, and I'm guessing it's my makeshift code in the web engine. Another interesting quirk is that the Editor version is much brighter, and the colours don't blend together as well with the lighting….but it's close enough for what the editor has to do….if I remember correctly, the Oblivion CS never quite looked the same as in-game either.
I'm now at the part where I need controls, and this is much more complex than I first thought…..moving objects in a 3D space, with only a 2D viewpoint, is very tricky mathematically. I was able to quickly create a movement system for the camera….which only has one bit of fudgy maths in it (basically I guessed a number and it looked good first try). Rotating and scaling an object within the scene is also pretty easy…but the hard bit? Figuring out what object in the scene you click on, and then how to make it follow the mouse along the XZ plane when dragging it. I am basically basing the controls off those of the Oblivion CS, mainly because I am just so used to them, and they work better for level design than those of a 3D program.
Selection is the first issue to tackle, and I was surprised to find there is actually no built-in store of what object renders where in OpenGL; the only way to do it is a hacky workaround…..I guess this is not a common problem? Whatever. Basically you re-render the scene, but with a few important differences: instead of the whole screen, you only render the one pixel you clicked on, and every object in the scene is rendered in a solid unique colour…..whatever colour that pixel comes out as is the ID of the object you clicked on. This render all happens behind the scenes and is never put on the screen, so you never see it. Great, now I can select objects. Seriously, this 'workaround' technique is actually listed in the OpenGL wiki tutorial pages. Why there is not a simple buffer for storing object IDs, I don't know.
But moving an object on the XZ plane? Oh dear….that's complex. Because of the perspective view, the movement of the object corresponding to one pixel is much smaller close to the camera than if the object is far away…it is completely non-linear…all due to perspective. This difference is also affected by the upward/downward rotation of the camera, as this affects how much of the plane you see. What a pain. This is not a trivial problem, but it is a very important control: when you are moving an object it has to go where the mouse is…else it doesn't feel right….
I have spent a day or so doing some serious thinking and some web research, and the best solution I can come up with is as follows. I know where the camera is and I know what object I have selected, so if I create a 3D line from the camera point, going through the pixel I clicked on, I can use it to find the intersect point on the XZ plane that the object sits on. Using that intersect point I measure how far behind it the object is, so that now I know the offset the object should have from my 3D line. As I move the mouse I draw a new line each frame and find the intersect point of this new line on the same plane; applying the offset, I can then reposition the object. (Both tricks are sketched in code below.)
At the moment this is still theory in my head, but the idea seems sound, and really much of this post was just me getting it down in writing so that I could make logical code from it. Anyway, here is a quick pic...it's not much, but it's the same level model from the engine, having been selected in the window.
------------------------------------------------------------------------
Moving forward, well, now I'm getting into the guts of the editor. I want to create a way to transfer objects into the scene with drag and drop from another window, then I get to figure out how level design will work…...but that's a whole other post that will tie in with the collision structure.
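To make the two tricks concrete, here is a hedged JavaScript sketch of both. Every name here is mine, and the editor itself is C++, so treat this as the logic only:

```javascript
// 1) Colour picking: draw each object in a colour derived from its ID,
//    read back the one pixel under the cursor, decode the colour to an ID.
function idToColour(id) {
  return [(id >> 16) & 255, (id >> 8) & 255, id & 255]; // 24-bit ID as RGB
}
function colourToId(rgb) {
  return (rgb[0] << 16) | (rgb[1] << 8) | rgb[2];
}

// 2) Dragging on the XZ plane: each frame, cast a ray from the camera
//    through the mouse pixel and intersect it with the plane y = planeY
//    that the selected object sits on.
function rayPlaneY(origin, dir, planeY) {
  if (Math.abs(dir.y) < 1e-8) return null; // ray parallel to the plane
  const t = (planeY - origin.y) / dir.y;
  if (t < 0) return null; // intersection is behind the camera
  return { x: origin.x + t * dir.x, y: planeY, z: origin.z + t * dir.z };
}
// On mouse-down: offset = objectPos - hit. On mouse-move: recompute hit
// from the new ray and set objectPos = hit + offset.
```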
  7. After much stuffing around getting the code to work (how come copy-pasting code never seems to work first time?) I managed to get the scene rendering in a similar way to where I got up to in the previous attempt, only of course with OpenGL doing all the drawing; it's running at about 5500FPS (yes, that's 5.5K). Hopefully with some additional work over the next few days I'll have the rendering doing exactly what I want, then I'll finally be able to get onto the fun stuff. Looks like OpenGL is the way to go for the editor.
  8. That's what today was. I discovered this last night shortly after I wrote the last update; I spent a few hours this morning benchmarking and trying to find where the slowdowns were, and basically every single step of the rendering pipeline was causing major slowdowns. It would be a huge job to overhaul it all to get it faster, and at the end there is a good chance I would not have something usable. I figure I'm only throwing away a week's worth of work if I get out now, rather than 2 weeks if optimising didn't work. Basically I made an assumption that code ported from JavaScript would run faster in C++, but it turns out that is not the case, and I simply don't have the C++ experience to optimise it yet. Previously I have worked with C++, SDL2 and OpenGL, so I have the code ready to go that way; I know it works well and I'll just have to make do with it. I had my reasons to stay away from OpenGL for this, but speed easily outweighs them.
  9. Well, this is a problem. I got the editor to a point where it can render objects without textures....and it sucks. After going through and optimising a bit, the best I can get is 45FPS, compared to the 120FPS the same scene gets in the JavaScript version. For the first time in a while I am stumped....this was not meant to happen, since C++ is much faster than JavaScript......I'm not quite sure where to go from here now. Do I rewrite large sections of code to try and make it faster, do I turn to an OpenGL implementation to take the drawing away from my control, or do I give up on a C++ editor completely.... This is really annoying.
--------------------------------------------
After a day of considering my options, and nine holes of golf, I have decided to move this editor to OpenGL. Hopefully I can match the rendering style of the web engine and figure out how to do the click functionality with it, but I know for a fact it will render anything I throw at it, and if I just tried to optimise my own code there is a good chance I'd waste a week and get nowhere. So yay, now I get to chuck away a very large segment of the work I have been doing.
  10. Ugh, code copying is very dull; it's just doing nothing but debugging all day, with no invention side to play with....but it does have the side effect that I am going over the entire JavaScript rendering process again, and that does turn up interesting things. Today I was copying over code that had to do with the zbuffer, 3D clipping and culling, and lighting, and I found something really interesting: I was calculating all the lighting values for every vertex in the scene, including all those that were not on screen, like those behind me. This is very bad, as it is a waste of CPU time to do this on values that aren't going to be used. It all stemmed from a logic choice to put both the 3D culling and 3D clipping code in a function together, outside the massive render loop. I had originally broken off this section of code as it is very long and very complicated: it is all about looking at and modifying vertices in 3D space against various planes, called the viewing frustum. I'm not going to go into it, only because I have a basic understanding and it does my head in with the amount of maths in it. The important thing, though, is that 3D culling is a fast method for excluding faces that are not on the screen, something that should happen before lighting, whereas 3D clipping requires vertices to have all their world information, which includes lighting information. The solution is simple: break them into two and do the lighting in-between, which is what I did, and with 15,000 vertices and 3 light sources I gained about 5 FPS, which in my search for more FPS is really nice.
To do with the stuff I was copying, I also had to muck with the scanline function, which if you remember is the single slowest piece of code in my engine. I found a piece of incorrect maths in it: I had several if statements to deal with an annoying divide-by-0 error if you only drew a single pixel, when in fact that 0 should never happen, as the maths was supposed to have a +1 on the end. (See the small sketch below.) While it was only in setup code and not the scanline loop, fixing it gained me a 1 FPS boost and also fixed a graphical error I had.
Anyway, that's all for now. The Editor is coming along slowly, and after some initial problems to do with making it multithreaded, the code conversion is working well. Also in progress in my head is a cheap collision model that will most likely get a discussion later on.
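The "+1" fix reads like this in miniature (names are mine; the real scanline interpolates several attributes, not just one):

```javascript
// Interpolating an attribute (say U) across a horizontal span of pixels:
// the step count is (x1 - x0 + 1), so a one-pixel span divides by 1, not 0.
function spanSteps(x0, x1, u0, u1) {
  const count = x1 - x0 + 1; // number of pixels in the span, never zero
  return { count, du: (u1 - u0) / count };
}
```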
  11. It's the day of the Doctor, and wow, they really outdid themselves in this episode. No spoilers from me though. It's really good to see them wrap up some loose ends while also setting up for the next season, which it appears is going to be really good.
  12. I am now at the point in the project where I need STRUCTURE and large data stores, mainly because I am rendering quite a lot of things (objects, lights, cameras, animations etc) and it's turning into a mess of gibberish numbers. This is fine for testing, but it will not scale, and even having 5 objects in the scene is a nightmare to position everything correctly…..no, what I need is an editor….a Construction Kit, if you will. The crux point for this is how I want to implement collision: even the basic collision I want is too complex for me to create manually, hence if I need to build a program to design that in, why not expand it to cover everything now.
I've known this for a while, so planning for this has taken place in my head for ages, and the editor will not be JavaScript based….why is that? Mainly because an editor often has to render much more than a game scene, and lag during development helps no-one. That's why the editor will be a C++ program that produces a game file the JavaScript engine understands, a bit like Oblivion's plugin/master files.
So C++ — that means I'm free of my shackles! OpenGL is available and I have an engine built already that works with OpenGL…..yes and no. I like OpenGL; on a serious C++ game project it is what I would use, but for the purposes of an editor, with lots of mouse interaction on the 3D scene itself, I just don't know how to do it with OpenGL. I could learn, but where is the fun in that? Let's do this the Mage way: let's convert the JavaScript renderer into C++ and have some fun.
The last time I had to build a 3D renderer from scratch was during an introductory assignment on graphics rendering at university. At the time I was fresh both to the theory and to C++ as a language; you can imagine that the end result was pretty poor and suffered from all the problems I encountered during the JavaScript build. It also struggled to render 4 of those apples in a black scene…yeah…it was baaaaaad. This time I have quite a lot going for me: I have a working implementation in JavaScript, which is similar enough to C++ that I can copy large amounts of logic across with little conversion. C++ runs at a significant speed boost over JavaScript in your web browser, meaning if I can make it run in JS, it will run in C++. I also have over a year's experience in C++ now, meaning the memory problems that slowed my program before will be avoided. This way I can avoid having to learn the OpenGL-specific ways of selecting objects, drawing 2D outlines/wireframes over the top of my scenes etc etc, and just implement them my own way. It also means the editor render will look really similar to the JS render, which is important when designing.
This editor creation and integration with the JS game engine will most likely take up a large chunk of time now, so I wouldn't expect too much for the next few weeks. After that, game engine work can begin and it starts turning into something that can be played. UNTIL THEN, feel free to ask any questions about anything to do with rendering, game engines or anything related to what I've said. This stuff is my jam, I really enjoy thinking and talking about it.
  13. Now, quads are a nice idea, but in rendering you can only render a flat face; I'd need diagrams to explain why. So here is a quad in Blender: I outlined the quad in red, and moved the opposite corners of the quad up in the air to form a quad that is not flat. Blender does its best to figure out how it should break up that quad for rendering (which is the blue line), hence you end up with two triangles instead of the quad you originally had....but the Blender renderer is deciding how the quad is split, because in that case there are two valid ways to divide the quad into triangles. Now, a renderer can interpolate that as a 'curve' (curves don't truly exist in computers, partly because pi can't be represented exactly; curves are just lines with lots of segments), but it's not something I want to program, though many non-game renderers do do such things. That's not to say quads aren't used in my models; it's just that I break them down into triangles to make sure I get the correct edge for each face. Some renderers (and I'm talking OpenGL and DirectX level graphics rendering here) have a scanline for doing both triangles and flat quads, as it is cheaper to have a dedicated quad drawer than to break them into triangles, but by doing so you just amplify the complexities. It's much easier to make sure your models just use triangles and take the slight speed hit.
I'll just run through quickly how quads differ from triangles at the 2D level (the above stuff applies at the 3D level). Once you have your points in 2D screen co-ordinates, you have a shape. It can have upwards of something like 6 points on it, depending on how it clips with the edge of the screen. You then take this shape and break it down into triangles (I use a technique called 'smart triangles', which I realise now is overkill, as my shapes are all convex...I must look into that; see the sketch below), and then for each triangle you find the left and right edges; these are stored, and then you go down the left and right edges filling in all the pixels between them. The main difference with quads is the finding of the left and right edges: with triangles it's super easy, you have a top point, then use simple checks to find which edge is left and which is right.....quads are harder, as with more points there are more combinations; the left edge could be A - B - C and the right edge A - D, or any combination in between.
---------------------------------------------
Bonus Picture: (terrible, terrible land model)
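As an aside on the "overkill" remark above: a polygon produced by clipping a triangle against the screen edges is always convex, so a simple triangle fan is enough — no containment tests needed. A tiny sketch:

```javascript
// A triangle clipped against the screen rectangle is always convex, so a
// fan from vertex 0 covers it.
function fanTriangulate(n) { // n = number of points in the clipped shape
  const tris = [];
  for (let i = 1; i < n - 1; i++) tris.push([0, i, i + 1]);
  return tris;
}
// e.g. fanTriangulate(6) -> [[0,1,2],[0,2,3],[0,3,4],[0,4,5]]
```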
  14. Well actually, a face is 3 vertices (a vertex is a point whose location is given by 3 values: X, Y, Z), no matter what (except for some engines that use quads....but those are weird; it's easier to break quads into two triangles). So yes, you find a face's normal by doing a cross product using the three vertices that make it up.....in a program like Blender it shows the face normal pointing out from the centre of the face. Since you then know the normal for each face, you can figure out the normal for each vertex. Unlike a vertex, a normal does not have a 'location'; it is a normalised vector (normalised means it has a length of 1) giving the direction the normal is pointing. 3D models do not actually store the face normals, since they are really simple to calculate; they do, however, store the vertex normals, since those are much harder (a vertex can be part of any number of faces).
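A minimal sketch of that cross-product normal, in the same JavaScript style as the engine posts (names are mine):

```javascript
// Face normal: cross product of two edge vectors, normalised to length 1.
function faceNormal(a, b, c) { // each vertex is {x, y, z}
  const u = { x: b.x - a.x, y: b.y - a.y, z: b.z - a.z };
  const v = { x: c.x - a.x, y: c.y - a.y, z: c.z - a.z };
  const n = {
    x: u.y * v.z - u.z * v.y,
    y: u.z * v.x - u.x * v.z,
    z: u.x * v.y - u.y * v.x,
  };
  const len = Math.hypot(n.x, n.y, n.z) || 1;
  return { x: n.x / len, y: n.y / len, z: n.z / len };
}
```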
  15. So you understand how a face has a normal, in that it points outwards in the direction that the face is .... facing. In order to find a vertex normal (well, at least one that can be used for smoothing), you take a vertex, then make an average normal by combining the normals of every face that vertex is a part of.
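Continuing the sketch from the previous post (this reuses that faceNormal function), vertex normals fall out of a sum-then-normalise pass over the faces:

```javascript
// Sum the normal of every face a vertex belongs to, then normalise.
// faces: array of [i, j, k] index triples into verts.
function vertexNormals(verts, faces) {
  const sums = verts.map(() => ({ x: 0, y: 0, z: 0 }));
  for (const [i, j, k] of faces) {
    const n = faceNormal(verts[i], verts[j], verts[k]);
    for (const vi of [i, j, k]) {
      sums[vi].x += n.x; sums[vi].y += n.y; sums[vi].z += n.z;
    }
  }
  return sums.map(s => {
    const len = Math.hypot(s.x, s.y, s.z) || 1;
    return { x: s.x / len, y: s.y / len, z: s.z / len };
  });
}
```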
  16. This will just be a short update. Remember how in the last post I was complaining about how horrid the 'flat shading' of the faces looks? Well, tonight I finished working on my new lighting model. Before, I would calculate the normal of a face and use that to get the lighting; then by finding the distance between each vertex and each light I would record how 'bright' each vertex was. To do smooth shading, instead of calculating the normal for a face, I use a stored normal for each vertex. Using this, I do the distance-from-lights and diffuse lighting check (diffuse is where you compare the normal to the light location and see how directly a vertex/face is pointing towards a light source) on each vertex and store how bright it is. (A sketch of the per-vertex check is below.) Despite having to move my lighting code much earlier in my pipeline, it doesn't seem to have taken too big a whack to the FPS. I did a quick side-by-side of the difference between the flat shading and the smooth shading code. The light source is the camera in this picture. Of course, objects can still have their faces set to be flat; that's all done by smoothing groups in Blender and such.
--------------------------------------------
So this project is now going to become my summer job for the next 3 months, after which I hopefully should have at least something playable from it. Since what I am doing now is becoming much more about movement rather than pictures, I intend to start up a development vlog on YouTube, which could also get some more eyes on it. Despite that, I'll continue to post here with smaller stuff like what I have put up today.
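A hedged sketch of the per-vertex check described — the distance falloff model and the field names here are my own guesses, not necessarily what the engine uses:

```javascript
// Per-vertex brightness: diffuse term from how directly the stored vertex
// normal points at the light, attenuated by distance.
function vertexBrightness(pos, normal, light, ambient) {
  const L = { x: light.x - pos.x, y: light.y - pos.y, z: light.z - pos.z };
  const dist = Math.hypot(L.x, L.y, L.z) || 1;
  const dot = (normal.x * L.x + normal.y * L.y + normal.z * L.z) / dist;
  const diffuse = Math.max(0, dot);      // vertices facing away get nothing
  const falloff = 1 / (1 + dist * dist); // assumed attenuation model
  return Math.min(1, ambient + diffuse * falloff * light.intensity);
}
```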
  17. Now it's time for a chat about games' animation systems, which I have started working on. As you would know from modding, NPCs and such do not use a static model; they have a model that is rigged up (sometimes called skinning) to a skeleton that is in turn animated through rotations and translations to make the model move in a believable way. Each vertex in the model is rigged to one or more bones in the skeleton, with a percentage describing how much that bone affects the movement of that vertex. This is used to make joints and such in high-poly models look good.
In my system? Well, I have quite a few limitations with the systems I am using. For starters, I have been exporting my models from Blender as .obj, which has zero support for rigged models or skeletons. What it does have, though, is something called 'OBJ groups', which I have used to assign vertices to bones....it has the limitation of only letting me assign a vertex to a single bone. While this would be a pain normally, since I can only use low-poly models anyway this limitation doesn't really affect me, and by careful placement of my skeleton bones I can make a convincing joint anyway. So now I have a way to rig my models to bones....but no way for me to get the skeleton I created in Blender out and into a format my engine can read (aka a JSON file). Solution? Hand-writing the skeleton file....yes, this is tedious and error-prone, but since most skeletons don't have a ton of bones in them and you don't tend to make many, this is a decent solution for now. And anyway, this is exactly how I had to build my skeletons in Oblivion when modding, so I have the process down pat now. The next step is to export an animation from Blender and get that into my engine; that is still a ways off yet, as I just spent 3 days doing quite a bit of file parsing for the skeleton, so I don't want to jump straight into file parsing for the animation format. I'll talk about animation in a later post.
I want to further expand on and record my technique for rigging my models from the obj format. In order to get the vertices into groups, I need to separate them into different objects in Blender and then export, but that leads to duplicated vertices, each one rigged to a different bone....anyone with 3D modelling experience will notice this would make huge seams at every joint...and you are right. My solution is very brute force, and I had to work the entire thing out in comments before I even started trying to code it. I record all the vertices as normal; afterwards I go through and detect all the duplicates and find whichever vertex among the duplicates has the bone with the lowest depth (meaning fewest parent bones to the root bone). This is stored as the true bone rigging. After this the duplicates are deleted, and any faces that were pointing to a duplicate vertex point to the single first version of it, which now has the correct rigging. In C++ (which my obj-to-json file converter is written in) it is a nightmare of heavily commented loops to do all this. (There's a sketch of the idea after this post.)
--------
Let's talk about 3D modelling for a moment. I'm no beginner at creating models, but I am also no artist; you can expect to see quite a few bad models once I really get developing, and my first person is no exception. To test all this I needed a character rigged up and ready to render, so I was forced to create one.
I will say I think I did fairly well on the body, but I really struggled with the head and the hands...I will most likely have to revisit them at some point, as they are ugly at the moment. The final vertex count for my character is also only about 250ish, which is really nice on the engine, even with the extra computing to do the animations. At the moment I just have a couple of bones hardcoded to move using sin and cos waves, so he just flaps his arms around and twists to let me see if everything is moving correctly. On the rendering side of things, though, it really does show I might need to invest in some sort of soft lighting approach, as flat faces on a character look awful, and I see why games quickly moved to Phong lighting.
--------
Lastly I must mention: completely by accident I discovered that by multiplying my transformation matrices backwards I could create a single stacked matrix to multiply my vertices with, rather than going through and multiplying each vertex by every matrix on the stack. Apparently this was a bit of common matrix knowledge I had completely missed/forgotten.....anyway, it doesn't affect FPS at the top level (there is a new bottleneck somewhere that I need to find), but it certainly speeds up that section quite a bit.
--------
Picture! My character model being bent by the sin/cos animation I'm using for testing.
EDIT: There is a single column of pixels on the far right of my canvas that my 3D scene is not being drawn onto.....I don't know when it started, but I noticed it tonight and it's really starting to annoy me.....I will have to go into detective mode tomorrow....most likely a greater-than-or-equal-to mistake.
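The duplicate-merging pass might look roughly like this. It's sketched in JavaScript for consistency with the other snippets here, though the actual converter is C++, and the field names are mine (compacting the vertex array after the remap is left out):

```javascript
// verts: [{pos: {x, y, z}, bone: id}], faces: arrays of vertex indices,
// boneDepth: map of bone id -> number of parents up to the root bone.
function mergeDuplicates(verts, faces, boneDepth) {
  const key = v => `${v.pos.x},${v.pos.y},${v.pos.z}`;
  const first = new Map(); // position key -> index of surviving vertex
  const remap = new Array(verts.length);
  for (let i = 0; i < verts.length; i++) {
    const k = key(verts[i]);
    if (!first.has(k)) { first.set(k, i); remap[i] = i; continue; }
    const j = first.get(k);
    // the duplicate rigged to the shallowest bone wins
    if (boneDepth[verts[i].bone] < boneDepth[verts[j].bone])
      verts[j].bone = verts[i].bone;
    remap[i] = j; // faces using this duplicate now use the survivor
  }
  for (const f of faces)
    for (let n = 0; n < f.length; n++) f[n] = remap[f[n]];
}
```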
  18. So today I am going to be talking about how to draw the sky in my game engine.....which is much harder than it sounds. WARNING: this is going to be a big post about a single topic, and it will have a fair amount of geometry in it.
Most people simply think of the sky as 'blue', but if you actually look at it you'll see that on the horizon it's nearly a white colour, and towards the spot right above you it's quite a dark blue; simply put, that's a gradient. To put it into 3D terms, it's actually a radial gradient spread over a half-sphere that surrounds you. Now I need to somehow take that information and put it into a 2D form to be the background behind my engine, but also change it depending on how high or low the player is looking, while still making sense. Most game engines use a skybox or sky dome in order to do this; that way they can 'paint' a sky texture and let the 3D modelling do the work of adjusting. I can't really do that: it is simply too expensive to draw such a dome, especially since textures cost so much already in my engine. My solution needs to be algebraic.
There are two ways to approach this, then: working from the 3D and getting it into 2D, or working from the viewport and getting the 3D. I chose the latter. If you look way back in these posts you'll see I have a vertical gradient background in one of the pictures; that is what I was working to replace…..hence it formed a good starting point. Now, when you look out at the horizon it's a light blue at the bottom and turns into a dark blue the further you look up….that's a simple vertical gradient. But if you look straight up it's darkest in the middle and gets lighter as it goes out….that's a simple radial gradient. Oh dear, now I have two different gradients that somehow meet in the middle….this is where the 3D world is screwing with me as I try to work in 2D.
The solution I ended up with was to treat the horizon as the outer edge of a very large radial gradient, so that the curvature would be small enough that it looks horizontal. So I have a 3D half-sphere with a linear (using linear to keep this simple) radial gradient on it; if I squash this down from the top into a 2D circle it might be useful….though it also has the problem that the gradient is now non-linear….and I really don't want to deal with non-linear algebra. But it still serves as a good starting point. I use this radial gradient and use the camera to only show part of it, so that as you look up the camera moves towards the middle. It works well, but the non-linear gradient makes it look not-quite-right, and since I am working pixel-to-pixel with the gradient I cannot make the gradient at the horizon look flat; it has a monster curve on it.
I started thinking about non-linear gradients and quickly realised that a Gaussian curve gradient might look better. The advantage of a Gaussian is that it tapers off slowly at the beginning and end, but quickly in the middle, and if I speed up the end section by doing some exponential division I can make the end taper off quickly as well, leaving just the beginning to be slow. That lets my dark section take up more of the sky, tapering off quickly the closer you get to the horizon. It's very similar to an exponential curve, but I have a bit more control over the beginning and middle. This is now starting to look much better, but the horizon is still too curved for my liking, so instead of a circular gradient I changed it to an oval; this levels out the horizon nicely and doesn't look too obvious when looking upwards. Ta-Da! I now have a sky. (A sketch of the gradient sampling is below.)
The sky is stored as a huge texture that is the same width as the camera, but several times taller. By drawing different parts of this texture as the background, depending on the x-axis rotation of the camera, it gives the illusion of looking up into the sky. Here is a diagram of what I mean: in the background is the full texture that the algorithm could generate; the red box represents the texture that is stored, and the yellow box is what the camera could look at. By moving the camera up and down it seems as if you are looking up into the sky.
Lastly, here is a picture of the sky rendered; I have moved the camera outside of the 'dungeon' so you can see it. Sadly the whole change-while-looking-upwards illusion is lost in a still image. It looks a bit bland outside of the dungeon though; next up will have to be a distant 2D picture (think the mountains in the distance from 90's games) that hides the horizon a bit, and also maybe moving this project outside, meaning it would be time for some heightmaps.
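A rough sketch of sampling such a gradient into a tall texture strip; the curve shape and constants here are my own guesses at something Gaussian-like, not the actual values used:

```javascript
// t runs from 0 at the zenith (dark) to 1 at the horizon (light); a
// Gaussian-style curve keeps the dark section large and tapers quickly
// near the horizon.
function skyColour(t) {
  const g = Math.exp(-((1 - t) * (1 - t)) / 0.18); // invented width constant
  return [
    Math.round(40 + 180 * g),  // R: dark blue ... near-white
    Math.round(70 + 160 * g),  // G
    Math.round(140 + 110 * g), // B
  ];
}
// One column of the tall sky texture; the renderer shows a window of rows
// based on the camera's x-axis rotation, as described above.
const strip = Array.from({ length: 512 }, (_, y) => skyColour(y / 511));
```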
  19. The reason for that is that by teleporting to the other portal you activate its OnTriggerEnter, and it puts you straight back at the first one. Easiest solution: have a variable that is set to 1 when you enter a portal and back to 0 when you OnTriggerLeave....then in your OnTriggerEnter have an if statement to check that the variable is 0. That way you can only be teleported again after you have left the portal you exited from. Sorry I can only give theory and not script code, I haven't done Skyrim scripting. (A language-agnostic sketch of the idea is below.)
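A sketch of that flag logic in plain JavaScript (not Papyrus, since the post gives theory only; the event and function names here are placeholders):

```javascript
// Ignore the enter event fired at the arrival portal until the player
// has actually stepped off it.
let justTeleported = false;
function onTriggerEnter(portal, player) {
  if (justTeleported) return; // we were just placed here by the other portal
  justTeleported = true;
  player.moveTo(portal.linkedPortal); // hypothetical API
}
function onTriggerLeave() {
  justTeleported = false; // safe to teleport again
}
```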
  20. Well, it would seem the programming bug has gone away for now, since I haven't done anything on this for about a week. I've still got plenty of ideas I would like to implement in this, but my motivation has dried up....I need to go and spend a bit of time doing some other projects and rebuild enthusiasm for this.....not to mention university is just about to kick into the final few weeks of semester....so that's probably a good thing. On that note I will leave you, and record for myself, a final picture of where the project currently stands.
Since the last update I tackled some simple lighting, and then created some very simple GUI elements, so that now there is a start menu, options menu and a crosshair in the game. It certainly adds quite a few really cool effects to the engine. The lighting is very basic: each vertex is distance-checked against the lights in the scene and an 'illumination' value is stored; this is then interpolated along in order to make all the pixels have approximately the correct lighting....in modern engines this interpolation wouldn't happen in this way, as it does not make picture-perfect lighting....but it'll do for me, since it's very cheap to calculate.
The UI was really interesting to develop. It is an element I have thought about before when building a game engine, but had never gotten to the point of before.....and it works exactly like I thought it would. Basically you store the location and size of each element, then when a user does a mouse event, like a click, you search through your UI elements for the one that takes up that spot. If you use an ordered array, then by searching back to front you can also pick the correct element if two are overlapping. (A sketch of this is below.) One thing that implementing a UI was good for as well was overhauling how the loading worked: now I could have an opening menu with a start menu that then loads up the level. I also got to restructure my files into a better folder structure, and stuff around with moving options from outside the HTML5 canvas into interactive UI elements in the screen; hopefully in the future this will be able to be played in fullscreen mode in the browser and look really sweet.
So finally, to the picture I will leave you with: a great shot from the same camera angle as last time that really gets to show off the progress. It's been a fun few weeks since the update on the 23rd. If you are wondering, that checkerbox rectangle is a button that takes it back to the start menu.
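The back-to-front hit test described is short enough to sketch (names are mine; it assumes the array is ordered bottom-most element first):

```javascript
// elements: [{x, y, w, h, ...}] ordered bottom-to-top; search back to
// front so the topmost of two overlapping elements takes the click.
function elementAt(elements, mx, my) {
  for (let i = elements.length - 1; i >= 0; i--) {
    const e = elements[i];
    if (mx >= e.x && mx < e.x + e.w && my >= e.y && my < e.y + e.h) return e;
  }
  return null; // the click missed every UI element
}
```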
  21. Gods no, I played that game again the other day...it looks absolutely terrible now. Graphics-wise it uses the ancient 2.5D ray casting technique to draw onto the screen; I at least can match Daggerfall's early true-3D-and-sprites combination. So it makes sense that objects that are not on screen right now are much cheaper to draw....or rather, not draw..... The reason for this is that, despite all the vertices for every object still needing to be transformed no matter what (since you need to move them to the correct position to even see if they are in front of or behind you), faces that end up outside of the viewable area (like behind you etc) can quickly be skipped over by doing a 3D clipping technique. Hence the second copy that happens when a face is going to be drawn never happens, nor anything after that, obviously. I hope that answers what you were asking.
  22. I'll give a mini-update and a cool new picture up here....despite everything I'm talking about being over a week old. So I continued my search for slowdown spots in the spare hours I had last week, turning my focus now to the vertex maths, since the screen drawing is about as good as I can make it at the moment. I spent ages simply trying different techniques to speed up the vertex maths....but other than pre-declaring some vertices and overwriting them, rather than creating new ones with each run....there was very little I could do to speed it up....the reason for this is simple: when I go to start rendering an object I have to make a copy of all the vertices, so that I can transform them without damaging the original data. After I transform all the vertices, I then have to make another copy of the vertices for each face; this is because faces can share vertices, and if one face transforms a vertex for rasterisation then it damages the data for the other face. So in the space of a few lines I am basically copying all the vertex data twice.....ergh...what a pain.....and it causes so much slowdown. Using some packing techniques I put the vertex data into a typed array rather than an object, so the first copy is much faster (see the sketch below)....but the second one, when faces are done, can't be helped....it's slow and I can't see a workaround. I also cannot do any of the face ruling-out techniques to skip drawing a face, as at this stage I am in the wrong coordinate space; all I can do is simple 3D clipping......something I really have to look into is finding a way to rule out away-facing faces sooner than I am.
Anyway, all my efforts did yield plenty of progress, and I gained plenty of FPS by doing all this......and I now have a fair vertex budget for things on my screen.....by my reckoning I can safely put about 10,000 vertices on screen at once.....this is tiny compared to a normal renderer, but much more than I expected. That apple you've seen in the pictures so far is 60 vertices; in my world, that would have to be a high-poly model....as ridiculous as that sounds. I'm hoping with some engine-side object selection I can quickly rule out objects out of view, to allow much more in my rendering scenes. Even at 30,000 vertices I still get about 16FPS on my desktop, 12FPS on my laptop. And that's one of the interesting things about this project: since this is JavaScript, the whole thing is running on a single core of your CPU......any system with a decent CPU speed can run it at the same sorts of speeds. Personally I'm amazed how far this has come and what JavaScript can actually let me do.
Now for a picture....I've finally gone and made some 'art' to put into my scene, rather than that terrible test hallway from before.
Since the speed is enough for now, I want to move on. I still need to implement the lights correctly...though that will not take long....but what I want to do is implement a UI in the engine, which is something I've long thought about but never had the chance to implement in a project before. I've also started thinking about where to take this.....I definitely want to go towards making a game using all this work, and I'm thinking an offline, C++-written level builder may be in the cards to be built in a few months.....tune in next time when I show off some UI features......though that may be some time away yet, as I have some other things I need to go back to working on now that I've had my fun on this project.
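The packing idea in miniature — a flat typed array lets the per-object copy be a single set() call instead of allocating new vertex objects (the stride and names are my own simplification):

```javascript
// Vertex data in one flat Float32Array instead of an array of objects:
// the per-object working copy becomes a single bulk set() call.
const STRIDE = 3; // x, y, z per vertex (the real engine may pack more)
const source = new Float32Array([0, 0, 0, 1, 0, 0, 0, 1, 0]); // 3 vertices
const working = new Float32Array(source.length); // pre-declared, reused
function beginObject() {
  working.set(source); // fast copy; transforms then overwrite 'working'
}
```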
  23. Wow, a few updates in a row here......sometimes I just get on a roll with these sorts of things. No pictures on this one, as nothing visually changed. I finally started optimising my code, and I quickly found two trouble spots. The first was a huge slowdown of around 100FPS when I rasterised the polygons into triangles (converting the 3D shapes into 2D triangles). This had little to do with the maths, however; it was all about the way I had been writing my JavaScript objects. I preface this by saying that before I started this project I had limited knowledge of how JavaScript worked, hence this mistake. When I was declaring my objects I was declaring all the properties and functions within the 'function' that JavaScript uses to define things.....this meant whenever I created an object of that type, everything within it got declared again. Since I was creating hundreds of triangles/polygons....it got real expensive quickly. There is a simple fix for this: put all the declarations for the object in the global scope using the 'prototype' tag, and only have the constructor code within the actual definition. (See the sketch below.) Works a treat; suddenly the majority of the slowdown that was causing disappeared. Good enough for now; I might come back to it later if, as I add more, it starts to slow down in the maths.
The other trouble area was the good old scanline again, where the pixels are saved to 'screen'.....it has been a problem area since day one. Time for me to roll up my sleeves and have another crack at it. It was basically slowing me down from 200+FPS to about 30 on this simple scene.....not good. JavaScript doesn't have variable types that you can see, but behind the scenes it still uses them, meaning that a lot of int-to-float conversion can happen without you seeing it....this is expensive and I need to stop it. I found that by bit-shifting by 0 (which does nothing) it forces the variables into integer mode....and integers are much better to increment than floats. I then noticed that the ZWUV values I interpolate through for texture mapping were causing one hell of a slowdown....which is very annoying, as it is only addition...but the problem is it is all floating-point addition. However, one thing I can do is stop it from doing the WUV interpolating on models that don't have textures. I removed the code I was using to implement wireframe mode....turns out it was wasting 5FPS even when wireframe was not on....gah, it was a silly bit of code...I got rid of the whole implementation and I'll try and do it better at some other point when I want it again. Now, the code to generate textured walls and flat-colour walls is the same, except that with a texture a whole heap of extra work is done....but in order to do this there are a few if statements to separate the code. I've decided to hard-split this code and duplicate the bits that are the same...not as manageable, but it grabs me another 7-10 FPS, so it's worth it.
So at the end of all that I managed to increase my FPS on the simple scene from a laggy 20-24 FPS up to a crazy 67-71 FPS. Much better. The FPS is completely dependent on how much of the screen is covered.....which I may be able to exploit if I create outside areas and use a gradient sky.
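The prototype fix in miniature, plus the bit-shift trick from the same post (the Triangle class here is an invented example, not the engine's actual one):

```javascript
// Bad: a method defined inside the constructor is re-created per instance.
// Good: the constructor holds only data; methods live once on the prototype
// and are shared by every instance.
function Triangle(a, b, c) {
  this.a = a; this.b = b; this.c = c;
}
Triangle.prototype.area = function () { // declared once, shared by all
  return Math.abs(
    (this.b.x - this.a.x) * (this.c.y - this.a.y) -
    (this.b.y - this.a.y) * (this.c.x - this.a.x)) / 2;
};
// The bit-shift trick: a bitwise op forces a value into integer mode.
const px = 12.7 | 0; // -> 12
```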
  24. That looks really nice, the style is well thought out.
  25. You are sort of close. Mip maps are used in the sampling step, which is the bit that happens after what I was talking about. The summary is that if you have plenty of memory, a mip map is just a smaller version of the texture; this is used when the texture is far away and you aren't sampling every pixel. This way you don't have to try and merge colours from multiple points on the texture on the fly, which can be expensive.
Did a little bit more today, just a quick job to see how viable doing some flat lighting would be on my models. As it turns out, very viable, and it only drops the FPS by a couple of points. There are two types of lighting that I know of: flat shading, where the entire face is lit the same, and another one where each vertex is lit and you smoothly spread that over the model....I forget what it's called. Obviously flat shading is cheaper to do, and I feel that for my blocky models per-vertex lighting wouldn't look all that much better. Remember before how I was talking about matrix multiplication? Doing per-vertex would basically double the amount of maths, as every vertex would then have to have a normal calculated as well. Doing it per-face with flat shading means significantly fewer normals to be calculated.
If you are familiar with the Phong lighting model, you'll know about ambient, diffuse and specular. Using these three components you can fake the lighting to look quite good for an area, and it is the most common non-ray-tracing way of doing light. That model only really works if you are doing light per-vertex, so for mine I have only used ambient and diffuse. Now onto what that means: ambient is pretty simple, this is the amount of light that lights the object from any angle. This way there is always some light hitting all faces in the scene. Diffuse is then used for light that is actually hitting a face, so the more a face is facing towards a light source, the brighter that light source makes it. (A sketch is below.) So how does it look? Well, like this:
At the moment I have a single global light that has infinite reach, which is good for testing, but of course this still looks unrealistic. I need to create a new 'light' type object that can radiate light out realistically, but for the moment, this shows off the lighting.
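A hedged sketch of the ambient + diffuse model as described — per-face, with a single global light of infinite reach, so no distance falloff; the ambient constant is a placeholder:

```javascript
// Per-face flat shading: a base ambient term plus a diffuse term scaled by
// how directly the face normal points at the light (normal is unit length).
function faceBrightness(center, normal, lightPos) {
  const AMBIENT = 0.25; // placeholder: some light always hits every face
  const L = { x: lightPos.x - center.x, y: lightPos.y - center.y, z: lightPos.z - center.z };
  const len = Math.hypot(L.x, L.y, L.z) || 1;
  const dot = (normal.x * L.x + normal.y * L.y + normal.z * L.z) / len;
  return Math.min(1, AMBIENT + Math.max(0, dot)); // global light: no falloff
}
```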