
Sunday, September 27, 2009

Game Review - Legendary : PS3 : Spark

Overview:

Legendary is an FPS (first-person shooter) developed by Spark and published by Gamecock. The title is based on the Pandora's Box mythos, which describes a box created by the Greek god Zeus and given to the created woman Pandora as a plague upon the earth. The game treats this box not as a purely mythological object, but as a real artifact that has been fought over throughout history because it contains powers that can make kings or destroy civilizations.



The game opens with a secret order called the Black Order hiring your character, Charles Deckard, to open Pandora's Box. Knowing the legend behind Pandora's Box, something tells me no amount of money in the world would get me to open that thing. Regardless, this guy apparently has some major debt to pay off, because he actually opens it. Enter crazy numbers of mythical creatures, including werewolves, minotaurs, griffins, golems, a huge kraken (which was a boss fight), and some really annoying faeries.

Once Deckard opens the box, a magical signet is impaled into his hand, allowing him to suck up a magical life energy from these creatures and use it for not a whole lot other than healing himself and powering EMP and other similar devices.

The Bad:

The reason I picked this game up was that it was fairly cheap at a used game store, and I like mythical creatures; what better way to show my love of mythical creatures than to fire thousands of rounds of digital ammunition into them? So I was very disappointed to find that much of the game centers on fighting the Black Order rather than fighting awesome critters. Not to say that you don't fight a bunch of mythical beasts, but much of the time it feels like werewolves are being thrown at you just so you have some way of recovering health, since this magical energy is generally not just lying around. There are moments where fighting the creatures is enjoyable, such as the first fight with the griffin and the first fight with the minotaur. However, many of these creature fights feel like you are just supposed to blast a couple hundred rounds of ammo in the creature's general direction until it dies, while dodging like crazy in hopes of not being eaten. There really is no good strategy for fighting beyond that, and as stated before, the signet isn't much good against any of the really challenging creatures, other than the extra health bar it provides.

Some of the other downfalls of the game relate to actual gameplay mechanics. Jumping feels like an afterthought: you jump too quickly, and the height is too short. It feels like the designers decided they needed jumping for one or two levels, and instead of making sure the rest of the levels couldn't be broken by jumping, they simply made the distance and height of the jump so small that jumping is impossible except in the few places where it is required.

Also, in some parts of the game, when hiding behind objects, the models appear to provide a clear line of sight on the enemy, yet when firing you still hit the object you are hiding behind. I am aware that this is a common issue in games, resulting from the collision model being less detailed than the in-game rendered model for computational reasons, yet in this title it felt like a real problem and at times a hindrance. Sometimes it felt like I was standing next to a fairly primitively shaped object and would still end up firing into it. The most amusing thing is that the AI apparently had trouble with this too; it is notoriously bad at throwing grenades and would end up killing itself in the process. This made me laugh more times than I can count.

Other minor things included door sizes being too small in places, making some locations frustratingly difficult to traverse. Some level layouts were also very challenging, with checkpoints too far apart in my opinion. I hate spending 15 minutes getting to just before the next checkpoint, only to have four to six werewolves back me into a corner where the controls start acting up and I can no longer move. Not to say I don't like challenging games, but there were at least three or four spots where I just felt ripped off, as if the game were denying me my justly deserved checkpoint.

The Good: 

Some positives of the game: I thought the voice acting was decent... not the best I've ever heard, but no Resident Evil 1 for the PlayStation by any means. The cut scenes, although they couldn't be skipped, were entertaining, and the between-chapter artwork was fairly nice, all hand painted and well narrated. There was also one part of the first "chapter" that I really liked, where the faeries move around an EMP device that you are supposed to charge. I thought that part was well executed; I give that script writer props and hope he got a raise for it.

 I also really enjoyed the ending.

***SPOILER ALERT***

Not for the cheesy way it kills the bad guy, but because it didn't meet my expectation of being a BioShock rip-off. I was totally expecting the game to take the overused twist of "The guys you are working for are really the bad guys and they have just been playing you this whole time, you poor defenseless player! Oh wait, you're not defenseless, let's go kick their ass!" No, it didn't take that twist (well, not entirely). Instead, Deckard becomes friends with the animals after busting out of the Council's jail, where he is imprisoned so his signet can be studied after he destroys the original box. However, none of this part is playable; they just choose to wrap up the story nicely in cinematic form. I do give them credit for this, as it is done far less often than the ending I was expecting from a good vs. evil vs. monsters game.

***END SPOILER ALERT***

Overall, I enjoyed playing the game; it was a nice three-day experience for me. It doesn't win any awards in my book, and the gameplay did feel a little lacking. I would rather have seen them rip off BioShock and provide some sort of mini game while hacking door locks, instead of just having me wait 20 seconds in a room with no dangers because they wanted to add some extra button presses. However, the story line was decent, the acting wasn't "terrible," and the art and animation of the creatures (especially the griffins) were well done. I would give this game a 55 out of 100, or about 5 out of 10 stars, or a C to a C+, depending on your scale. Either way, I would recommend this game if you're bored, have nothing else you really want to play, or don't have much money in these hard economic times to buy a good game but still want to play video games instead of finding a real job. (That's right, you heard me, you bum: get off my couch and get a real job!)

Wednesday, September 23, 2009

Google SideWiki and Gaming

I recently heard about a new tool that Google is coming out with called SideWiki. Basically, it allows Google Toolbar users to add wiki-like annotations to a sidebar on web pages they visit. These are then visible to everyone using SideWiki who visits that page.

My interest is in seeing what people do with this tool in the arena of alternate reality gaming. We have seen this in the past with, for example, the Nine Inch Nails sites when Trent Reznor was preparing to release the Year Zero album. That series of sites had unique, hidden content that could only be discovered by closely inspecting them.

The concept is interesting; it is not really what I would call a game, but it could conceivably be turned into one, like the legendary Majestic from EA back in 2001. That game had a great concept and a real story line. What I imagine for Google SideWiki is something more like your standard pen-and-paper RPG, where one group of people creates a plot line across a series of sites, and the other group playing must discover and navigate them to collect all of the clues to solve some sort of web mystery. Perhaps there could be a secret web site they could log into to find out whether they were the first to figure out all the clues. The best part is that if you found a clue, you could, in theory, alter the SideWiki entry to throw others off the trail if they weren't smart enough to look at the history.

I think it is an interesting idea and one that others might find interesting as well. The thought reminds me of geocaching, something I find fascinating but would never personally partake in. I hope someone takes this idea and runs with it.

Friday, September 18, 2009

Ogre3D updates Licensing

Ogre3D, the open source 3D graphics engine from Torus Knot Software Ltd, has decided to change its licensing from LGPL to the MIT License. This is really exciting, as the new license, which will take effect in the upcoming 1.7 release, is less restrictive: it no longer requires developers who use the free open source engine to release their changes to the base source code.

The reasoning behind this is to provide more incentive for commercial and non-commercial developers to use and extend the Ogre engine in their projects. I am personally hoping this pushes the Ogre engine a bit more toward the mainstream game development scene, and that more companies will consider using this flexible, free engine in their development model. Using open source software in mainstream development can help reduce the large cost of game development and give developers more time and money to be flexible and creative in the types of games they create. Allowing them to keep their proprietary custom code private should encourage them to use the engine more freely.

Monday, September 14, 2009

A Few Site Changes - Games I Own List

I made a few changes to the site today. I added a list of the PS3 games that I own and will try to keep it current as I acquire more titles. Sorry, I don't own an Xbox, and I don't really plan on getting one, since almost every title that I want to play is either on PS3 or on the computer.

If you also own one of these titles for PS3 and feel like playing some multiplayer, you can get hold of me on the PlayStation Network as DKGameStudios. Or just drop me a line at my email address (or a comment here) with your user name, and I'll add you to my friends list.

Happy Gaming!

Wednesday, September 9, 2009

Tool Review - Redmine : Project Management Software

Review time again. This time I will be reviewing a utility that I have used on a couple of projects I have worked on. It is a great tool for managing your personal and independent projects and project teams. The software is called Redmine, and you can find it at http://www.redmine.org/.

The Redmine web application lets you create projects and add members to them. Each member can be given different roles on a project, providing control over the development process and over how data is presented to individual users. The software is very flexible, allowing for the creation of custom workflows and project-specific milestones. Gantt chart features are available to project leaders for tracking individual tasks as well as entire milestones or projects.


Wiki features are also provided, allowing for streamlined design documentation. This is really handy for group development: documentation can be updated effortlessly while team members are updating status. The Redmine application also integrates with SVN servers, so files can be browsed directly from the Redmine site, and you can even use revision numbers as links in the wiki pages.


Built-in bug tracking also provides a centralized issue tracking system and task assignment functionality for project managers. These bugs and tasks are easily linked to milestones, where the progress of individual tasks contributes toward milestone completion percentages.


The system is easy to use and fully customizable, since it is built on Ruby and the full source is provided for your own modifications. I recommend it to anyone looking to start up a project with friends, to independent game developers who cannot afford to spend a lot of money on bug tracking and project management suites, and to anyone who would like to be a bit more organized about their development practices. The best demonstration of the system is to simply visit the Redmine site, as they use their own software to manage the public face of their project. I have found this software tremendously useful, and I hope that by spreading the word, others will too.

Monday, September 7, 2009

The Origins of DK

So about once every month or so, I get the same question. "What does DK stand for? Does it mean Donkey Kong? You know that's not very original..." So I want to set the story straight.

It doesn't stand for Donkey Kong, plain and simple. I am a bit more original than that (at least I think so) and would never, ever, ever use the name of a game character as the title of my blog.

What it does stand for is Dranco Karanth. I'm sure you are all saying to yourself, "Who?". Let me explain.

When I was about 10 years old, I used to play a game called Legend of Kesmai. Google it if you are unfamiliar with it. It was an awesome MMORPG from around the time of Ultima Online; the servers were originally hosted on GameStorm before that company went toes up. Anyway, Dranco Karanth was my first character ever in an MMORPG. He was basically a warrior priest (they called them "Thaums" for short; I can't for the life of me remember why). As the years progressed and I started playing D&D, the character changed from a priest to a red mage and picked up some back story. A few years later I designed my first "world" around him and his friends and even wrote the beginnings of a book that I hope to finish some day. He has always been in my head and in my heart, and he is my biggest inspiration for making games and designing worlds. The DK identifier is a simplification, and it unfortunately carries a general connotation with a big furry monkey from Nintendo, but who would want to visit a blog called Dranco Karanth Game Studios???

That's all I have on that matter. If you don't care for it, I'm sorry, but it is something special to me and, well, frankly, this is my blog, so tough :).

Thursday, September 3, 2009

Philosophy of Software Test

I have been a software test engineer professionally for three years (going on four). I have done everything from writing automated GUI control test scripts in Visual Basic to embedded test systems engineering in C for a Wind River operating system running Ada with C interfaces. I know that software test is important for any application; I have seen products fail test evaluation that would otherwise have gone to the customer.

Unfortunately, not everyone shares my opinion on the matter. Some feel that test software is wasted code, providing no economic gain over the long term of a project. These people tend to be the ones who write the checks for your project, so you usually have to do some serious convincing to get them on board. Once you do, you still need a good method of approaching your test system; otherwise you will be wasting money, a lot of money.

Here I will try to lay out some basic principles and guidelines to follow when writing test code for any project.

Step 1: Determine your system's primary method of outputting debug information
Many applications use the standard console window as their primary destination for debug output. I do not recommend this; once you have 100+ modules that you are trying to integrate and test, the debug information spewing to your console will be unreadable.

A better solution is to output all of your debug information to a common log file for the system. I will elaborate in a bit on why I prefer a single log over one file per module.
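As a rough sketch of what I mean, here is a minimal version of this in C++. Everything here (the file name, the function name) is illustrative and not taken from any particular framework; the point is simply that every module calls one shared function and everything lands in one log file.

// Illustrative sketch only: one shared log file that every module appends to.
#include <cstdarg>
#include <cstdio>

// Append a single tagged line to the common debug log.
void DebugLog(const char* module, const char* fmt, ...)
{
    FILE* f = std::fopen("debug.log", "a");   // one file for the whole system
    if (!f) return;

    std::fprintf(f, "[%s] ", module);         // which module the message came from
    va_list args;
    va_start(args, fmt);
    std::vfprintf(f, fmt, args);
    va_end(args);
    std::fprintf(f, "\n");
    std::fclose(f);
}

A call like DebugLog("Player", "spawned at (0, 0, 0)"); then ends up in the same file as everything else, which is exactly what makes searching the output later practical.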

Step 2: Design your Test Strategy Early
This is probably the biggest problem for people. They know what type of software they want to write, so they go for it. Then, approximately 50-70% of the way through development, they realize that their system is poorly designed and half of their code doesn't work. The reason is that people generally don't take into account how long it takes to get a system into a testable state. By the time you reach that point, much of your code may be wrong and no longer true to the original design of the system. Trust me on this one: I have worked on major projects that went astray from their original design. No matter how well the system functions after that, you are always bending your own rules to get things to work right. This rule bending is painful and costly; sometimes whole portions of the system need to be redesigned to accommodate some minor tweak made to fix an issue that resulted from a poorly structured architecture/test combination.

The key to success is to decide how you are going to systematically test each component (software or hardware) before integrating it into the entire system. This is called a functional test or a unit-driven test, depending on which camp you come from. The idea is that each module has a set of tests that exercise its basic functions as they would be used in the actual system. I do not mean that you need to write a whole platform to test the module; that would defeat the purpose of testing. What I mean is that you create a harness to place your module in, one that exercises the function's inputs and outputs to verify that your internal algorithms behave appropriately. It is probably a good idea for this to be a centralized module that lets you turn individual module tests on and off, so that as you incorporate more modules into your system, they are not only easy to add but can also be tested together within the same test structure.
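To make that concrete, here is a bare-bones sketch of such a centralized harness in C++. The class and function names are mine, purely for illustration; the idea is just that each module registers its self-test and individual tests can be switched on or off as the system grows.

// Sketch of a centralized test harness; all names here are illustrative.
#include <cstdio>
#include <map>
#include <string>

typedef bool (*TestFunc)();   // a module self-test simply returns pass/fail

class TestHarness
{
public:
    // Register a module's self-test and whether it is currently enabled.
    void Register(const std::string& name, TestFunc test, bool enabled = true)
    {
        tests_[name] = test;
        enabled_[name] = enabled;
    }

    // Turn an individual module's test on or off for this run.
    void Enable(const std::string& name, bool on) { enabled_[name] = on; }

    // Run every enabled self-test and report the results.
    bool RunAll()
    {
        bool allPassed = true;
        for (std::map<std::string, TestFunc>::iterator it = tests_.begin();
             it != tests_.end(); ++it)
        {
            if (!enabled_[it->first]) continue;   // module switched off for this run
            bool ok = it->second();
            std::printf("%s: %s\n", it->first.c_str(), ok ? "PASS" : "FAIL");
            allPassed = allPassed && ok;
        }
        return allPassed;
    }

private:
    std::map<std::string, TestFunc> tests_;
    std::map<std::string, bool>     enabled_;
};

A new module then costs you one Register() call, and once a module has been proven out you can disable its test to keep integration runs fast.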

An example of this is a custom dictionary. Say your custom dictionary is going to be used by two separate modules within the system, for example as a means of passing data between two separate threads. If you write your dictionary with a set of tests that exercise each of its functions by supplying artificial data for each of its fringe cases, then you can be confident that the dictionary itself is functional. Once you start implementing the worker threads that will access this dictionary, you will obviously want to test each of their functions (including the functions that write to or read from the dictionary) in the same manner as you did the dictionary in the first place. This all seems to check out; all is going well. Then, once we start integrating, we see a problem: for some reason one module is receiving corrupt data from the other. We can't just rerun our tests, because they only prove out the individual components. However, if we design our system so that a fully tested module can be used to write data to the dictionary, while the dictionary's own read test is used to verify the dictionary, we have now eliminated one third of the potential problem area of the system. The same can then be done against the other module.
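As a sketch of what those fringe-case tests might look like (the dictionary here is just a stand-in; in practice it would be your own custom class), the dictionary's self-test could exercise missing keys, overwrites, and removal in isolation before any worker thread ever touches it:

// Hypothetical dictionary self-test exercising its fringe cases in isolation.
#include <map>
#include <string>

// Stand-in for the custom dictionary; substitute your own class here.
typedef std::map<std::string, int> Dictionary;

bool RunDictionaryTests()
{
    Dictionary dict;

    // Fringe case: looking up a key in an empty dictionary must not blow up.
    if (dict.find("missing") != dict.end()) return false;

    // Normal case: a written value can be read back.
    dict["health"] = 100;
    if (dict["health"] != 100) return false;

    // Fringe case: overwriting an existing key keeps exactly one entry.
    dict["health"] = 50;
    if (dict["health"] != 50 || dict.size() != 1) return false;

    // Fringe case: erasing a key really removes it.
    dict.erase("health");
    if (!dict.empty()) return false;

    return true;   // every case passed
}

This is also the test you would keep pointing at the dictionary's read side during integration, while a fully tested module exercises the write side.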

This type of approach will help you narrow down the possible problem areas of your system. A useful strategy I have implemented in the past for checking for memory leaks within a data structure was to create a separate test module that simply queried my memory manager and printed out its allocation table. I then set up this test module to run between integration components and module self-tests. This gave me a quick way to identify whether a section of the system was using more memory than expected. It was also useful because it exercised the functions of my memory manager as well as the memory manager's integration with the other components in the system.
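Here is a sketch of that idea. The memory manager interface shown is hypothetical (substitute whatever query your own manager actually exposes); the useful part is just snapshotting the allocation figure before and after another module's self-test and flagging any growth.

// Hypothetical memory-leak check wrapped around another module's self-test.
#include <cstddef>
#include <cstdio>

// Assumed interface on your own memory manager; replace with your real query.
extern std::size_t MemoryManager_BytesInUse();

// Runs a module self-test and reports whether it left memory allocated behind.
bool RunWithLeakCheck(const char* moduleName, bool (*test)())
{
    std::size_t before = MemoryManager_BytesInUse();
    bool passed = test();
    std::size_t after = MemoryManager_BytesInUse();

    if (after > before)
        std::printf("%s: possible leak, %lu bytes still allocated\n",
                    moduleName, static_cast<unsigned long>(after - before));

    return passed && (after == before);
}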


Step 3: Let your Architects/Designers Identify your tests
The reason for this is that unit testing should be done before any system code is written, and who knows better what data will be required for the inputs and outputs of your functions than your architects?

Step 4: Identify WHAT to test and HOW MUCH to test it
Of course, if you are the architect, then this section is for you.

This is actually a tricky question. Many times your test team will want to test every component and module in the system. In a perfect world, that would always be the case. However, in the real world, cost and schedule demand that you use agile practices and identify key areas for testing. If you were to test everything, writing the test code would take more time than writing the actual code. Performing modular testing is part of agile development practice because it allows you to release stable code and integrate new features into your software more quickly with fewer bugs, yet it still generally takes too much time to perform modular testing on every component in the system. The trick is to rely on your foundation classes.

APIs such as the .NET Framework and Apple's OS frameworks are expected to be stable. Their vendors perform their own unit testing and provide the APIs to developers to work with and exercise before releasing them to the general public. By leveraging their work, you avoid having to write most of your own data structures, and as a result you save both development time and test development and execution time. In my experience, many companies seem reluctant to rely on these APIs. I am not sure why, but if you are faced with the dilemma of either using a pre-built table class or rolling your own, think twice about rolling your own. When you are writing software for yourself or for your company, it's not about academics; it's about money, maintainability, and rapid, stable development.

Also, it is unfortunate, but any time your company runs into a budget or schedule crunch, test will be the first thing to go. Managers' priorities lie with meeting their deadlines, not with making sure the product is 100% quality assured. This is even more the case when it comes to writing test software to test hardware. Generally, a computer engineer designing a board for some device will perform a flying probe or a functional board test to verify that the module works (similar to my description of unit-driven software testing above). As a result, if you cannot finish your testing in time for the product release, more than likely the engineer or manager will decide that the functional board test was sufficient to show that the product is working... This is wrong. The reasoning is the same as the reason a module can pass a unit-driven test 100% but fail once it is integrated into the rest of the software system: it is impossible to fully understand how two devices are going to behave together before actually hooking them up and making them communicate with each other.

Step 5: Debug Information - Better Information is Better Information
As I stated before, you really should centralize your debug data. Sorting through your runtime output is much faster than if it were split across individual files for each module. I know this seems counterintuitive, but realize that text editors, along with processors and RAM, have improved over the years, and searching a text file is no longer a computationally heavy task. Well, that assumes you don't have a hundred megs of output, which may happen at times, but most of the time it won't.

Now, what I mean by "better information is better information" is that the amount of information is not usually proportional to the quality of the information being output. Sometimes having too much information is more detrimental than having too little; on the other hand, having too little information can also mask your problem. The best bet is to settle on a format for your debug output early. This will allow you to quickly identify the major components of each debug message. My preferred format is:

[module the debug information is coming from]: [message type (error, recovery, status)], [message-specific data], [expected results], [actual results], [optional time stamp].

This format is uniform and quickly identifiable. For example, if you have a player class that checks collision and you want to identify at runtime whether you collided with other objects in the scene, a message might look like this:

"Player: Error, Collision encountered with [object] at (x, y, z), Collision flag = true, [object] Type should not be collidable." 

Here we can see that an error was thrown by our error manager framework. It was output to the log, and it contains all the data we need to see what is going on. It appears that the problem lies in the player's collision function (at least at first glance) and that the [object]'s collision type was not set to collidable. If that is not the case, the only other possibility is that the [object]'s collision mode read function is not passing data back to the player class correctly: either the player is overriding the data it receives, or the data is corrupt by the time it gets there. I personally think this is much better than something that looks like this:

t: 0
Player: (x, y, z)
Object: (x2,y2,z2)
t: 1
Player: (x, y, z)
Object: (x2,y2,z2)
t: 2
Player: (x, y, z)
Object: (x2,y2,z2)
t: 3
Player: (x, y, z)
Object: (x2,y2,z2)
Collision: (x, y, z)

Or, even worse, no data at all. The difference is that the better version only reports on a failure or critical status. This helps cull data that is unimportant to us, making it much easier to find what we are looking for.
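To tie the format back to code, here is a small sketch of a helper that emits messages in the module/type/data/expected/actual shape described above. The names and the enum are made up for illustration; the real version would live in whatever error manager framework you already have.

// Illustrative helper producing messages in the format:
// [module]: [type], [message-specific data], [expected], [actual]
#include <cstdio>

enum DebugType { DEBUG_ERROR, DEBUG_RECOVERY, DEBUG_STATUS };

void ReportDebug(const char* module, DebugType type,
                 const char* data, const char* expected, const char* actual)
{
    static const char* typeNames[] = { "Error", "Recovery", "Status" };

    FILE* f = std::fopen("debug.log", "a");   // same shared log file as everything else
    if (!f) return;
    std::fprintf(f, "%s: %s, %s, %s, %s\n",
                 module, typeNames[type], data, expected, actual);
    std::fclose(f);
}

// A collision message like the one above could then come from a call such as:
// ReportDebug("Player", DEBUG_ERROR,
//             "Collision encountered with Crate at (1.0, 0.0, 3.5)",
//             "no collision (Crate type is not collidable)",
//             "Collision flag = true");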

I could continue to talk about this topic all day, but this post is getting a bit large, so I will cut this session short for now. I will probably add more short discussions about testing philosophy here, as that is what I do for a living. I am starting to implement a new game idea and will be applying some of these practices to that title; I will share my experience here. If you have any comments on this, I am always willing to be persuaded toward better methods, so don't feel shy about telling me you think I am wrong. I hope this has helped some of you identify weaknesses in your testing strategies. Remember, it's OK to be OCD when it comes to software perfection.