The New 8-bit Heroes: New NES game and creation documentary
Created by Joe Granato
Latest Updates from Our Project:
A shorter, bite-sized update...
– Wed, Nov 01, 2017 at 09:46:19 PM
*****IF YOU SEE THIS UPDATE, PLEASE LIKE OR COMMENT...IT SEEMS A LOT OF PEOPLE ARE NO LONGER GETTING NOTIFIED, AND I'D LIKE TO TRACK THAT SO YOU ALL KNOW WHAT IS HAPPENING WITH THE PROJECT*****
It's been suggested by some backers that I give a very short, succinct update on the project and its three components. If you would like to know more specifics, please see the update that preceded this one, as it is close to 10,000 words, with videos and images, giving a very detailed account of where we are, what we've done and accomplished, and what is left.
TO DO:
THE FILM:
Additional legal clearances, making sure it is ready for distribution
Working with a distributor to get it to platforms.
Finalize official Blu-ray artwork and bonus material.
EXPECTED DELIVERY: digital, Jan 2018. Physical, add manufacturing time.
THE GAME
Inject all of the narrative
Fix bugs as they are found
Finalize boss / strange enemy mechanics
Free up enough memory for two additional songs
Revise NPC verbiage to fit within the memory constraints
Play test, play test, play test.
Finish *song effects* (spells)
Finalize art, manual
Manufacturing
EXPECTED DELIVERY: Unknown. Hopeful at the early part of 2018 (questions on this, please see details in the previous update or PM us)
THE 'TUTORIALS'
YouTube based set of tutorials, coming soon
NESmaker development tools, exposed in Jan of 2018
EXPECTED DELIVERY: V 0.1 expected in March 2018, Final version dependent on funds raised for additional expansion development of NESmaker.
The Game, The Film, The Developer's Resources...
– Wed, Oct 25, 2017 at 09:50:45 AM
PROLOGUE: Three years.
I can't believe how the time has flown, how much we've learned, and how much life has transpired since we began this project. We have been fortunate enough to meet so many of you in our travels. Others rely solely on Kickstarter feedback for the information. Especially for those people, here is an epic update (seriously, pour yourself your favorite libation and find a nice shade tree to sit under...) to look at where we've been, where we are, and where we're going. The vast majority of you have been incredibly patient with this project, and so in turn I'm going to patiently write a long account of all things related to The New 8-bit Heroes, and what the last three years have looked like from our end.
One of the most common questions or concerns that I get is about the completion date of the game itself. There are some who somehow believe that we've given favor to the other components to The New 8-bit Heroes project (the documentary, the development tools), and that those are in some way incidental or other projects altogether. To be extremely clear, this has always been a multi-tiered project, with each component sharing equal weight.
From the original Kickstarter video: A game, a film, and "tutorials"
What I want to do with this update is to discuss each part of the project, what our initial expectations were, and the development of each until now. Let's start with the film.
PART 1: THE NEW 8-BIT HEROES FILM
For those that don't know, The New 8-bit Heroes film was actually born out of failure. The setup for the project is true. I returned to my parents' home and found old childhood illustrations mixed in with boxes of NES stuff. However, the impetus for finding all of these things was the filming of a segment for It's Dangerous to Go Alone, The Movie, a documentary about the influence of the Legend of Zelda (and other game series from the 8-bit era) on shaping contemporary media. You can see the trailer here:
The opening shots of The New 8-bit Heroes were actually filmed for the stalled Zelda documentary...
The New 8-bit Heroes was born out of a failed Zelda documentary
As part of this trip home, I went searching for my NES and my collection of games. I filmed braving the blizzard for the Zelda film. And yes, I did find that box. But I also found, right with it, the childhood illustrations for Mystic Searches.
When the Zelda documentary imploded due to a lack of necessary funding, I was more than a little disappointed. I honestly think it was this disappointment that led me to meditate on failed creative pursuits, which ended up leading to deep meditation on my first *failed* creative pursuit - the designs for that NES game Mystic Searches I'd done at 8 years old. As a human story, that concept resonated with me, and I wondered if it might make for the backbone of an interesting film on its own. In fact, in the original longer edit of The New 8-bit Heroes, there are a lot of references to the Zelda documentary failing. But I digress...
Judging by the support we received for The New 8-bit Heroes project, by all of you, by the homebrew development communities, by press and gaming publications, this was a project that people wanted to see.
Honestly, when I started work on this film, I had no idea what the end result would look like. My initial plan was for a low-key documentary that centered around behind-the-scenes technical info of developing for the Nintendo Entertainment System while chronicling the active and relatively unknown homebrew scene. Essentially, we just started filming things at random that might have been related, whether directly or tangentially. The very first footage we actively filmed for this project was on October 12th, 2014, in Atlanta, Georgia, where we had a fleeting opportunity to sit down with Tommy Tallarico from Video Games Live, whose video game composing career dates all the way back to the NES era. Executive Producer John Lagerholm, original composer for Mystic Searches Jessey Foster, Austin, and I all crammed into my tiny car with a full set's worth of camera equipment, ventured 9 hours north, and sat down during sound check (the piano tuning sounds rather ominous in the background!) to pick his brain about developing good music for the archaic system.
Filming Tommy Tallarico
It was an opportunistic film session; we didn't have a clear narrative for the film yet, and we didn't know exactly what we could use from an interview with him, but we didn't want to miss the opportunity. So we cast a wide net, capturing a 40-minute interview.
That's how it started. And for the next year or so, it set the tone for the entire process. I'd be fortunate enough to travel for work, or I'd use vacation time to take a long weekend to attend retro gaming conventions, and I'd find homebrewers or former NES artists who were within a few hundred miles of wherever I was traveling. Fueled by copious amounts of energy drink and carrying only what I could strap to my back, I darted around the country in countless rental cars getting every piece of the story I could. Oftentimes, Austin would accompany me.
We continued to learn about more stories that we wanted to explore. Over the next two years, our travels took us all over the continental US, visiting several cities multiple times. Filming was sometimes complicated. There were instances where both Austin and I needed to be in shots, or where we needed multiple cameras rolling on an event or interview. With the amount of traveling involved, the project's small budget didn't allow us to hire a crew or even travel with high-end gear. So just about every shoot after the Atlanta one was approached as minimalist. We often had to rely on practical lighting (the lights actually in the room, as opposed to production lights), and at least a few times relied on people who had never operated a camera before to be button pressers for significant scenes.
Connect the dots...
And the stories just kept getting more and more interesting. From a sit-down with best-selling fantasy novelist Piers Anthony (Xanth, Incarnations of Immortality), to speaking about music and composition with personal influence and Grammy winner David Sardy (Rage Against the Machine, Red Hot Chili Peppers, System of a Down), to being invited to the set of The Goldbergs to wax poetic about the NES era with fellow 80's kid Adam F. Goldberg, to great sessions with YouTube personalities like James Rolfe, John Lester, Norman Caruso, and Pat Contri, to hearing the stories of a host of homebrewers...we had more content than we could possibly know what to do with. The collected footage of supporting interviewees went far beyond what we could've hoped for in our best-case scenario.
These are many of the awesome people who were part of this film.
The downside of all of this was that with every addition to our growing roster of personalities, our growing personal accounts, our growing list of compelling stories, the production time for the project increased exponentially. Keep in mind, every weekend we took a trip to conduct these interviews was one more weekend away from editing, from programming and art assets for the game, from forward progress.
Then, of course, in between these adventures we were also filming our own adventure and our own trials of trying to create Mystic Searches. There were technical issues. There were narrative issues. There were art issues. It was also challenging to know how and when to film these things, since we were both the ones going through them and had to be the filmmakers. Austin and I would get into a creative disagreement or be articulating some problem on Facebook, and we'd say, "Wait, stop...we need to film this." Then we'd reach out to any of our friends who happened to be available (starting at the top with those who were well versed behind a camera) and pick up the conversation face to face with the cameras rolling.
Sometimes, we set up the cameras, hit record, and hoped for the best...
Other times, we had to dummy up shoots of conversations that had taken place online earlier in the week.
When principal photography was completed, we had shot over 7 TB of footage.
Wow! 7 TB of footage!
To put into perspective just how much footage that is, here's a ten-second clip of David Markey's film projector as we cued it up to watch The Year Punk Broke. It is approximately 40 MB.
10 s = 40 MB
So, one second of HD footage captured with our cameras is approximately 4 MB, and 7 TB is equal to 7,000,000 MB. That means we captured roughly 1,750,000 seconds of film, which is about 29,167 minutes, or 486 hours of footage. To make that number easier to wrap our heads around: we'd have to sit and watch footage for 20 days straight, without a break, to see every bit of it. Where does one even begin to carve a two-hour story out of that?
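The back-of-the-envelope math can be sketched in a few lines (a minimal sketch, assuming the roughly 4 MB-per-second bitrate implied by the projector clip, and decimal units for TB/MB):

```python
# Footage math, assuming the ~4 MB/s bitrate implied by the
# 40 MB / 10 s projector clip above.
MB_PER_SECOND = 40 / 10            # ~4 MB per second of HD footage
TOTAL_MB = 7 * 1_000_000           # 7 TB expressed in MB (decimal units)

seconds = TOTAL_MB / MB_PER_SECOND  # 1,750,000 seconds
minutes = seconds / 60              # ~29,167 minutes
hours = minutes / 60                # ~486 hours
days = hours / 24                   # ~20 days of non-stop viewing

print(f"{seconds:,.0f} s = {hours:,.0f} h = {days:.1f} days")
```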
It was in January of 2016 that I began my first attempt at editing it all together. I'd been doing the grunt work as we progressed - cataloguing footage, syncing camera angles and audio tracks, marking good verbiage and choosing best angles, trying as best I could to clean up instances where our guerrilla-style filmmaking had led to corrupted cards or bad audio or uneven lighting. My first edit was chronological, and it told our story from 2014 to present. It took me well over two months of concerted effort to put it together. It was four hours long. And it was horrible.
When I say horrible, I mean it was pretty much unwatchable, and that's not even taking into account the length. While there were plenty of interesting pieces, as a film it just sort of meandered aimlessly, and honestly had no thematic core. It was just a random string of things that happened to people that, unless you knew them, you probably wouldn't care about. I did some wedding videography early in my career, and brides are sometimes notorious for wanting "the entire wedding". I always insist on a highlight reel, because no one, not even the bride and groom, wants to watch the hour ceremony and the 3-hour reception in real time. I liken the experience of the first edit to watching someone else's boring wedding video. No matter how invested the people were who were part of the events being filmed, it just didn't translate well to a watchable experience. I became pretty depressed about this. I never showed anyone that edit. And it had a funny way of really challenging my confidence as a storyteller.
In March of that year, I took a week off of work to attempt to structure something more watchable out of the collected footage. After all, there was enough footage there to make several films! For kicks, I began reorganizing the footage not by chronology, but by thematic content. It took forever, but I began to take similar sound bites and string them together, no matter when in the project's lifespan we filmed them being said. I took similar visual iconography and lumped it together. All of a sudden, these very powerful new themes began to jump out at me. Doing this seemed to line everything up, and it made me really understand the story that we were telling. It wasn't the story about making a video game for a 30-year-old system. It wasn't the story about the culture of people who still do it. It was a look at the ambitions we all have in our childhood that we shed as our rite of passage into adulthood, and about the unintended consequences of reintroducing those silly ideas and passions into our adult lives. It became a story that any creative could relate to, and it became an even more personal story for me to tell. And it actually harkened back to the fact that this film was born out of the failure of the Zelda documentary, and how important failure can sometimes be.
Things moved fairly quickly from there. By May, I hosted a first showing to Austin, our wives, and a few filmmakers whose critical opinions we trusted. We all sat around with notepads and watched a rough assembly edit (which had black screens and title texts for the many missing components, and had close to no music). This edit clocked in at around two hours and fifteen minutes. Still too long by at least a half hour, but much more taut than 4 hours!
Without saying a word to each other, I had everyone write down any thoughts they had as the film played, and at the end, we had an open forum discussion about what they'd written down. There were a lot of places where their notes were similar, which really helped me to determine what was working and what was not. I used that feedback to construct an even tighter edit. I secured the rights to music that I wanted to use, the team cranked out the titles and a few animation sequences, and as summer was wrapping up, we were close to a finished film. There were a handful of shots that I'll call the *connective tissue* shots we had to stage. For instance, general b-roll footage of being frustrated at coding or writing in notebooks.
In the fall of 2016, we premiered the film to a sold-out crowd here in the project's town of origin, Lakewood Ranch, FL. Of course, we arrived in style...
That's just how we roll...
Red Carpet
The line forms here...
After the premiere, we hit the road a bit and held screenings around the country. We did the NY and LA showings, one at the Portland Retro Gaming Expo, one in my former city of Baltimore, and one in the town where I grew up, which was especially significant. Not only was it significant in that I got to reconnect with friends and family I hadn't seen in decades, or due to the warm welcome we got from press and news organizations, but there was something particularly relevant about the theater where we held the showing...
After our limited screenings, the film entered the festival circuit, where it's been for the past year. It did fairly well, being selected for film festivals internationally and winning some awards.
Along with strong festival showings, it received some great press. Here are a few examples:
The latest festival was just a few weeks ago, as The New 8-bit Heroes opened the Nottingham International Film Festival in the UK, which is humbling to think about. A project that began with finding a few silly illustrations from youth has turned into a creative work that has gained international attention. I am overwhelmed by this every single day.
So what's next for the film? Is it done? Well...sort of.
We've been discussing distribution possibilities with several entities. At the same time, we're crossing the t's and dotting the...lowercase j's, making sure everything is good to go from a legal perspective. Also, we are still finishing the extended features for the Blu-ray / bonus digital material. Again, out of the 7 TB of footage, it would be a shame not to share with you some of the great stuff that had to be left on the cutting room floor, including extended interviews with some of the people listed above! But other than that, the film is finished, out in the world, and very close to being in the form that you can watch in your own home. People have seen it, and the response has been incredibly positive.
PART 2: MYSTIC SEARCHES, the new NES game
Believe it or not, we have had a proper NES game since December of 2014. Our base mechanics have been in place for a very long time. Here is the first little proof of concept test demo for reference:
Way back in our updates, you can even see some early testing of this at our '14 "Naughty or Nice Christmas Party". But let's talk about the long, strange, tumultuous journey from then to now.
For approximately six months after launching the project, I buried myself in 6502 assembly, the language used to program NES games. Once I got the hang of the actual syntax and learned the basics of what was possible and how to achieve it, the real learning began. If you've kept up with the updates, you've probably noticed that I've mentioned time and time again that memory mapping, not the language, is the real barrier to programming more complex things for the NES.
At the 6 month mark, I had four interconnected arbitrary screens that generally resembled the video above. The player had health that was represented at the top. We were using a code base called FamiTone to handle music and SFX, and we had different numbers of the same monster on each screen (which had hit points). We had developed simple collision detection with background objects, and had managed to program solid, walkable, hurt tiles, ice, and holes in the ground. A lot of incidental things like hurt recoil or game restarting when out of health and whatnot were present. It was essentially a glorified tech demo.
Unfortunately, as we neared showcasing this particular tech demo at that year's Portland Retro Gaming Expo swap meet, we found that we were running out of room. Only four screens, one song, limited graphics, one type of AI, and we were running out of room. It was then we had to begin learning about the various mappers and how they operate.
A mapper inside an NES cartridge loads different chunks of data at different times so that you're not bound by the measly 32 KB memory limit of the NES. You can only have 32 KB loaded at a time, but you can swap out chunks to perform different routines, load different data, etc.
There are 256 types of mapper for the NES, each with its own particulars (also, some of them are *bad* and don't work or are not supported). Some work fine on hardware but are not supported by emulators. A lot of the differences are in how granularly they load data. Some load 2 KB at a time, some swap out the whole 32 KB, some have four 8 KB banks. Some use ROM for graphics, while some use RAM for graphics.
The mapper that appealed to us most was mapper 30, which is actually a post-market mapper. It swaps in 16 KB banks, where one 16 KB chunk is always static, and the other can be swapped out quickly and arbitrarily. We also made the switch from using CHR-ROM to CHR-RAM. With CHR-ROM, the graphics are hard-coded onto the graphics chip in the cartridge; they are changed via the mapper, and can only be as granular as the mapper will allow (for instance, if you wanted to load in new monster tiles, you might also have to use up ROM space for duplicates of hero tiles on both monster tile sheets, because the mapper only allows loading full tile sheets). With CHR-RAM, graphics are copied from the ROM into RAM and pushed to the picture processing unit. This allows much more granular tileset changes. For instance, in our game, we can change down to a single tile in a tileset at any time. If that whole paragraph sounds confusing or a little nebulous, imagine going forward with trying to choose a mapper and ROM vs RAM without even knowing the right questions to ask. That was where we found ourselves.
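As a rough illustration of that trade-off, here's a toy Python model (not actual NES code; the 256-tile sheet and 16-byte tile sizes are standard NES figures, but the cost functions are simplifications for illustration):

```python
# Toy model of the CHR-ROM vs CHR-RAM trade-off described above.
SHEET_TILES = 256   # a full NES pattern table holds 256 8x8 tiles
TILE_BYTES = 16     # each 8x8 tile is 16 bytes (two bitplanes)

def chr_rom_swap_cost(sheets_needed):
    """CHR-ROM: the mapper can only point the PPU at whole pre-baked
    sheets, so swapping one monster's tiles can mean storing a duplicate
    sheet that also repeats the hero tiles. Cost is bytes of ROM."""
    return sheets_needed * SHEET_TILES * TILE_BYTES

def chr_ram_swap_cost(tiles_changed):
    """CHR-RAM: tiles are copied from PRG-ROM into RAM individually,
    so a change can be as small as one tile. Cost is bytes copied."""
    return tiles_changed * TILE_BYTES

print(chr_rom_swap_cost(2))   # two full sheets: 8192 bytes of ROM
print(chr_ram_swap_cost(1))   # one tile update: 16 bytes copied
```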
When the dust finally settled, we had a lot of code to rewrite. The entire way that we loaded graphics had to change. We had to reorganize our code so that our level data was in its own bank to be called when the level loading routine was called, and put our music in its own bank to be called when music was being updated, etc. All the while, my handle on ASM was still rather shaky. This took months to get right, and we ended up just showing the 4-screen demo at the PRGE Swap Meet.
The night before the PRGE Swap Meet
But, the test audience still seemed impressed with our progress!
It was around then that a lot of obvious questions started to emerge. How much music could we fit in the game? How many tilesets of graphics (for geography, for monsters, for cut-scenes)? How much text? Now that we had fairly rigid constraints of our banks, could we calculate how much was possible?
We brought on another artist by the name of Erin Johnson. His sprite work was amazing. However, even though it looked very NES, and it *seemed* to follow all of the NES rules for graphics, it went way beyond what we'd actually be able to do unless we wanted to severely limit how many screens our game could have!
Mockups...with too much detail...
These were some of the great mockups...they adhered to the color constraints (truly, only 9 colors + black on a screen, since the other three are used for the HUD), and even to the color *grouping* constraints (only 3 colors + black in a 16x16 pixel area). But the sheer number of individual tiles that make up these screens was far beyond what could be loaded at any given time without some serious NES trickery, and even if we went down the road of NES trickery, it would mean sacrificing ROM space for more graphics, for which we'd have to sacrifice game mechanics or songs or text or some other thing.
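The color-grouping constraint can be sketched as a quick validation pass (a hypothetical Python checker, not one of our actual tools; it flags any 16x16 block that uses more than 3 colors besides a shared background color):

```python
# Sketch of the "3 colors + background per 16x16 area" rule described
# above. Screen layout and color ids are hypothetical.
def check_attribute_areas(screen, background):
    """screen: 2D list of per-pixel color ids. Each 16x16 block may use
    at most 3 colors besides the shared background color. Returns a
    list of (x, y, colors) tuples for blocks that break the rule."""
    violations = []
    for by in range(0, len(screen), 16):
        for bx in range(0, len(screen[0]), 16):
            colors = {screen[y][x]
                      for y in range(by, by + 16)
                      for x in range(bx, bx + 16)} - {background}
            if len(colors) > 3:
                violations.append((bx, by, sorted(colors)))
    return violations

# A 16x16 screen whose single block uses 4 non-background colors fails:
bad = [[(x // 4) + 1 for x in range(16)] for _ in range(16)]  # colors 1..4
print(check_attribute_areas(bad, background=0))  # [(0, 0, [1, 2, 3, 4])]
```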
So instead, we began putting together templates for the various geographic areas for the game. Every geographic area had unique ground types, trees, features...the land of the dead might have gravestones where the forest might have rocks. The city might have pots where the desert might have cactuses. That sort of thing. Here are some examples of tilesets that were created during that period; a desert, the beach town, the wastelands:
Tilesets and mock screens by Erin Johnson
We made tilesets for all the geographical areas this way, and, once again, started playing the game of detailed graphics versus size of world versus complexity of game versus number of monsters versus music/text/mechanics....
We held massive screen building sessions with the three of us where we'd spend hours modifying art assets, building screens, and assigning appropriate collision and attribute data in large hex tables.
Late Night Screen Building Sessions with Austin and Erin
We made a few diverse monsters that behaved differently and manually programmed their behaviors, putting them on screens to demonstrate variety of play. We programmed some basic game-like states...a start screen with its own music, droppables when you defeated monsters, NPC cut scenes. They were all things that made it feel very game-like, even though it was still an aimless, loose collection of screens. It certainly demonstrated the style and feel of the game. We took it out into the world to a handful of gaming conventions.
Testing Mystic Searches at our very 80's booth at RetroGameCon in Syracuse
This brings us to the summer of 2015. At that point, it looked like we had a very clear and direct path to the *finish line*.
It was around then that a confluence of circumstances really began to mix things up. Adam F. Goldberg invited us to have this version of the prototype appear on the show. Telltale artist and friend Mark Neil, as well as a few other artists, offered to begin work on the art side of a modern port for Mystic Searches while a few students began constructing an iconic game location in Unreal. We had proof positive of a device that could write and read data from the NES's controller 2 port to link modern and classic versions of the game. So many things, all at once....
Mystic Searches prototype broadcast nationally on The Goldbergs...for 1.4 seconds!
A 3d render of an older Julian
Video of what the modern component of Mystic Searches may have looked like
Our controller 2 port interface working!
At around the same time that we were trying to capitalize on this excitement, things started to break down in our game's engine. First, our *static bank* was running out of space. We needed to spend time migrating routines and re-linking them to free up more space, which led to all sorts of gremlins. Another huge problem was that we couldn't get anywhere near the number of screens we wanted for this game with the existing way that screen data was structured. Due to the way our code was written, it truly resembled some ugly spaghetti. You know when you have a massive ball of Christmas lights all tangled up, and you wonder if you should just throw that ball of lights out and buy new ones rather than spending the time unraveling them? That's where we were with the code.
Then it got worse. Our efforts to raise funds for the modern component of the game with these amazing artists fell short. The guy who was working on our peripheral disappeared off the face of the earth (to this day, I have no idea what happened to him). Erin, our tile artist, moved to California and became engaged in other projects. And our 3d animator, Mark, died in a tragic car accident. These things combined were devastating to morale and progress.
This was July of 2015.
I took some time to refocus. I made a very hard decision to essentially start the engine over from scratch, and to focus the entire effort of the game development on what we could do for the NES, forgoing the modern component. Now, it wasn't completely from scratch - many of the routines I'd written could directly translate. But I needed to step through the hundreds of thousands of lines of code section by section to find problems, optimize code, and find ways to compress resources to get more screens in the game.
One of the major changes to increase the number of screens was to change to metatiles rather than tiles. What I mean by that: instead of making and loading large tables of 8x8 pixel tiles, we made *assets* that were 16x16 pixel tiles instead. This worked relatively well for most graphics. For instance, bushes were generally two 8x8 tiles tall and two 8x8 tiles wide, making them perfect for single 16x16 tile values.
4 8x8 tiles versus 1 16x16 "metatile"
The graphics for this take up the same amount of space in the ROM and in a tileset, but instead of a routine that loaded four values to make up this bush, I wrote a routine that stepped through every other tile horizontally and every other tile vertically, read the value of each metatile's top-left corner, and filled in the other three tiles with the three corresponding tiles from the tileset. This allowed us to draw a screen with 1/4 the number of values, giving us 4 times the number of potential screens.
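The idea can be sketched like this (a minimal Python model of the expansion, not the actual 6502 routine; it assumes a 16-tile-wide tileset, so a metatile's four 8x8 quadrants sit at offsets +0, +1, +16, and +17):

```python
# Sketch of the metatile expansion described above: each screen value
# is the top-left 8x8 tile of a 16x16 metatile, and the other three
# tiles are that tile's neighbors in the tileset.
TILESET_WIDTH = 16  # assumed tileset width, in 8x8 tiles

def expand_screen(meta_screen):
    """meta_screen: 2D grid of metatile top-left tile indices.
    Returns a grid twice as wide and tall of 8x8 tile indices."""
    h, w = len(meta_screen), len(meta_screen[0])
    tiles = [[0] * (w * 2) for _ in range(h * 2)]
    for my in range(h):
        for mx in range(w):
            tl = meta_screen[my][mx]
            tiles[my * 2][mx * 2] = tl                         # top-left
            tiles[my * 2][mx * 2 + 1] = tl + 1                 # top-right
            tiles[my * 2 + 1][mx * 2] = tl + TILESET_WIDTH     # bottom-left
            tiles[my * 2 + 1][mx * 2 + 1] = tl + TILESET_WIDTH + 1
    return tiles

# One metatile value (a "bush" at index 0) expands to four 8x8 tiles:
print(expand_screen([[0]]))   # [[0, 1], [16, 17]]
```

A screen stored this way needs only one value per 16x16 block, which is where the 4x savings in screen data comes from.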
Unfortunately, it also added a major issue. It forced us to have to rethink all of those awesome tilesets that Erin had created. Here's an example of the problem:
8x8 tiles versus 16x16 metatiles
Using the same tileset, you can see how we are able to make a 3-tile-wide roof. Many of Erin's assets were based on this - a left corner, a middle piece, and a right corner, which is an excellent idea for modular creation, as you could iterate the middle piece as many times as you want to get different sizes and shapes. However, in moving up to 16x16 metatiles, you can see we can't recreate that roof. We would need a whole tile that was "middle and right", and if we wanted to make it variable in size the way that the 8x8 tiles could've been, we'd have to use up a whole other 16x16 tile for a redundant "middle tile". Like this:
Making 16x16 pixel modular tiles
If you think about that roof being broken up into its 8x8 pieces, it takes up twice the amount of tileset space, meaning we could really only have half the graphics with half the fine detail. So once again, it came down to weighing options. We could get more detailed graphics with significantly fewer screens, or less detailed graphics with more screens. We ended up choosing the latter, and had to rework both much of Erin's graphics and our routines for how screens were loaded.
At the same time we realized that in order to speed up production, we'd need to create a suite of development tools. Instead of seeing the values in long hex tables, altering the tables, altering the collision tables, then exporting, testing, and making sure we had it right, we needed to actually see and edit what we'd see on screen and be able to test on the fly.
As you might imagine looking at these two images side by side, being able to have the instant feedback, see ghosted images of surrounding screens, apply collision at the same time as tiles, change assets on the fly, adjust palettes...all exported to the right places increased our ability to quickly create our world exponentially. The tool was still creating the hex tables, but instead of having to type values in digit by digit, we were able to just see how it would actually look in game and it would write in our changes and updates. (**MORE ON THE TOOL IN THE NEXT SECTION**)
By winter, we had a solid working version of the engine, and were pretty close to feature lock. We had our relics working, we had multiple spells working, we'd worked in our *lute playing* mode, which we'd once considered would be impossible to do, we had much of the music and sfx composed, we had varied monsters with a whole slew of normal top down monster behaviors, we had about two dozen different tile types working (things like warp tiles, jumpable tiles, breakable tiles, slashable tiles, monster spawners, music-secrets, etc). We had dark rooms and boss and ambush rooms. We had a series of items that did various things. We had all HUD stats such as strength and defense working. Our experience system allowed a player to level up. We had a simple shop system in place. The extreme majority of what would be the engine for Mystic Searches was finished.
But we knew it would take us a long time to actually create and perfect the world. So as a bonus, and as a great method to beta-test the engine, we created a playable beta that acted as a prequel during the 48-hour Global Game Jam in January of 2017. Most of you have probably seen this update:
At the end of the 48 hours, we found some significant bugs in the engine, and set out to correct them. Unfortunately, we ran out of memory in our static bank. I spent about six weeks optimizing code, moving routines, and fixing the bugs once enough room was freed up. Again, it's this sort of memory management that has taken the majority of programming time over the last two years...the simpler game that we'd originally planned would not have required this and likely would've been done a long time ago, but we are ecstatic with just how much functionality we've been able to pack into an NES cart.
We made this prequel beta free to play via a link on our website, and encouraged bug reports and feedback. The feedback was incredibly positive, but users did continue to find, and help us to fix, small bugs. For instance:
Much to our surprise, supporters of the project advocated to get Mystic Origins, the beta, on a physical cartridge. Since the work was already done and it didn't take much time away from development of Mystic Searches, we made a run of 100 "Official Beta Cartridges". We sold out of them in minutes! Copies of the official beta tester cartridges were shipped to addresses all over the world. Even though it was only a 10% vertical slice of what we intended for Mystic Searches to be, it was a fully playable game for the Nintendo Entertainment System, and even as a 10% vertical slice, it was more involved than we possibly could've hoped to create when we started the project in 2014!
Mystic Origins, The Yellow Cart!
Through June of 2017, I continued to tweak some of the mechanics that we'd wanted for the game, but didn't put in Origins due to them being a bit glitchy. Much of the focus went into building comprehensive platforming mechanics into our topdown game, into developing a system for background animations, and into getting routines for flash saving. Unfortunately, with so much ROM space used, everything I wanted to add or perfect required me to do more code optimization. Still, here's a demonstration of platforming in our top down game working intuitively, and background tile animations to create these waterfalls!
The rest of the summer was spent tweaking the mechanics and world building. Here is how the overworld for Myrinda, the Avalon-like world of Mystic Searches, came together:
That just about brings us up to now. So what's next? Well, now it's a matter of doing on a large scale what we did for the beta...we go through area by area and introduce the narrative, use the tool to build NPCs, monsters, bosses, and puzzles, followed by a lot of playtesting, and inevitably some bug fixing and recoding. Then, we'll have to define exactly what data will be saved and restored (the save/restore routines already exist, but right now it's all just initialization data being loaded). There are at least 2 other songs that we have to work in even though we're running very low on space for music, and we have to figure out how to squash the story down so all the text fits in our available text banks. We will use the remaining space to include a few fun music easter eggs as well (songs you can play on your lute that have screen effects).
So while there is still work to be done, that's where we are with the game. We are in its final stages, but because of the nature of things still to be done, I don't want to give a false release date. In the meantime, we have decided to release Mystic Origins on standard gray cartridges, with box and manual, for anyone who still wants that adventure!
Gray Cart, Box and Manual coming soon!
Part 3: NESmaker - tools to help you build your games
So, in the beginning, the prospect was simple. We wanted to create a documentary, a game, and, as we learned, leave a trail of breadcrumbs for like-minded people to follow in order to help them create their own games. I can't stress enough that when I started this, I knew literally nothing. I filled up graph paper notebooks full of notes for the first few months, and was constantly picking the brains of people on the NESdev and NintendoAge forums for info. Fortunately for me, they were eager to help and incredibly patient, and eventually, I came to learn a lot.
But it's one thing to understand the basics of developing for the NES and a whole other thing to express them in a succinct (and interesting) way. I have attempted workshops at colleges like the University of Baltimore and in conjunction with local game stores. Even giving a thorough explanation, it takes hours just to get something happening on the screen. I first started making a series of video tutorials like this one:
I made four of these videos before realizing that it was a terribly inefficient way of presenting the information. So I tried shifting gears, creating a walk-through on a friend's really cool tutorial site called Squareknot. This tutorial is a step by step version of what I generally teach in person at workshops.
But again, this too was full of such thick information, and so laborious to write...it took forever to make the slightest thing happen on the screen, and time creating these tutorials was time away from working on The New 8-bit Heroes film and the Mystic Searches NES game.
It wasn't long after this that we started to develop the NESmaker tools. At the time, they weren't really meant to be a single set of tools...there was a screen editor and an object animator and an asset builder. NESmaker as a large concept happened quite by accident. Here's the inside story of how it accidentally became a thing...
We had flown to Seattle to showcase the project at Seattle Retro Gaming Expo. The night before the event, while tweaking the game engine (never do that...it's the dumbest thing you can do), the game broke. I spent all night trying to fix it, and couldn't quite get it together. We were left with nothing to really show after having flown thousands of miles. So, we set up our development environment and allowed people to create screens that they could then load into our custom emulator (designed by NES guru Shiru) and then play them in real time. People loved it, and the continual question was "Where can I get this?!". At that point, we hadn't even considered any release of the tools. They were just our in-house method of mining for data without having to spend additional time in 6502 Assembly language.
Here's a short video (a few months later) of us promoting the GAAM event in Jacksonville with footage from SRGE.
So we started discussing the prospect with our tool developer, Josh. We began combining the tools and working on the interface a bit. He would tweak the tool, and I would tweak the Assembly language under the hood of the engine to support his changes. Before long, the tool's capabilities exceeded our Mystic Searches needs and it truly became a faux object-oriented development environment for the NES.
As of this moment, the tool allows you to build top down adventure games for the NES, with a lot of flexibility, without ever having to write a single line of code. It allows you to flash to a NES cartridge with a single click. We've watched people at various conventions build what look more like beat-em-up levels, screens that function more like simple platform levels...it's amazing to see how even in its current form the tool can be bent and broken to do some pretty diverse things!
Guests gathered to make screens at Emerald Coast Con
Here's a long look at how the tool went from a dumb in-house concept to something much more robust. This is where the tool was a little over a year ago:
There is so much more that we want to do with this tool. We already have full design documents created on ways we'd like to expand it. Input editors, genre modules that load optimized code for particular genres, a piano-roll-like music editor, and even a simple code editor that will expose the assembly code for things you'd be likely to want to change, in a way that won't break the rest of the engine.
In order to do this, however, we'll have to hire on Josh, our tool developer, full time. We intend to launch a tangential crowdfunding campaign to support our ability to make these extensions. Right now, the tool is pretty capable as it is. We plan on expanding the tool as far as the community wants to see it expand!
Now, let's talk about some of the grumbling that I'm sure we're going to hear. Some people will likely begin arguing that lowering the access point to develop for this system will dramatically increase the number of *crap* games there are by homebrewers. It'll flood the market. You know what I have to say to those people? Video games are no longer an interactive toy made by software engineers for profit. They have become legitimate forms of artistic expression. They have become storytelling mediums. Everyone should have the opportunity to express themselves through this medium. And, in fact, they can. Anyone can learn GameMaker or Unity or RPGMaker or any one of the various softwares geared towards storytellers and asset creators (and NOT programmers) in order to tell their story. Many of the people who begin down the path of learning the NES get tripped up by the necessary programming side of it, and end up burning out or jumping ship to create their project in a more intuitive environment like GameMaker.

This raises the question - how many great post-market NES games have we missed out on simply because there is a high threshold, from a programming perspective, to creating NES games? Programming is, of course, important in game development, but as modern development has shown us, it is not the only metric of what makes a compelling game experience. Case in point: how many games today are modified experiences based on the same fundamental game engines? And programmers aren't always the best game designers, either. They're not always the best artists or the best storytellers or the best musicians. So what if we created a tool that put the power of developing for the NES in the hands of everyone interested in creating for the system, not just those with aptitude in programming? This is the basic mission statement of NESmaker.
But don't get me wrong, I absolutely want to see young developers also learn 6502 ASM. We want NESmaker to be a springboard for game designers to learn more about what's under the hood, to be able to study the underlying code and make changes to optimize and better realize their unique ideas...to provide them a running start rather than dropping them in the middle of a very large desert and saying "good luck".
I also still plan on a tutorial series on ASM, with one major difference from how I've proceeded in the past. I want to try to tackle the very dry content in an entertaining way. This will make it both fun to watch, AND fun to create. So I'll be starting a YouTube series called NESmakers, and I'd love for you all to subscribe and follow the progress! Here's a first teaser:
EPILOGUE: A summation.
A quick breakdown of this epic post:
The film is complete, finished with its festival run, reviewed strongly by both film critics and audiences. We are going through the last bit of legal considerations and pushing for distribution in early 2018.
The game world is finished, and right now is the long process of working in the narrative, testing, and fixing bugs that inevitably crop up.
We plan to launch a crowdfunding effort to make the NESmaker tools even more robust and finally give you all the chance to create NES games without learning any programming! In addition, we'll be launching a few YouTube shows, including NESmakers, which will see me, JGV, a host of guests, and a ton of random, comedic circumstances as we try to show how NES games are created.
Hurricanes, world building, crossing oceans...oh my!
over 8 years ago
– Wed, Oct 04, 2017 at 10:53:30 PM
A few project supporters have asked what happened to September’s update. We regretfully missed the update we had planned, but it’s completely fair to say that the omission would legally be declared as an *act of God*. For those that aren’t aware, the home base of The New 8-bit Heroes is in Sarasota, which was pretty much a bullseye for Hurricane Irma impact. Decisions of whether to leave home and brave the nightmarish traffic or to fortify inside the house, prepping and securing the house, scavenging for supplies, extended loss of cable, internet, phone service and power, clean-up and aftermath…it was quite an unforgettable experience.
And of course, like the jackass I am, I filmed a lot of the prep with my iPhone for your enjoyment…presented here in this slammed together montage. I was posting these videos in real time on social media, but I have to remember some of you do not follow us on those outlets. So if you missed these, here is what I shared from *storm central*.
Next week, I will be traveling home to where this all began…the house I grew up in, the forest where the seeds of ideas for Mystic Searches first germinated…and it’s there that I plan to do a significant update on all three elements of the project; the film, the game, and the resources. But since the Irma video was pretty much void of The New 8-bit Heroes content, as a teaser I humbly submit this time lapse of months worth of world building….
Lastly...we are officially international. Tomorrow night, The New 8-bit Heroes opens the Nottingham International Film Festival in the UK! These are exciting times, friends...check it out!
Why does it take so long? A look into issues when developing a complex NES game...
over 8 years ago
– Sun, Aug 06, 2017 at 03:20:41 PM
Prologue:
It's been a constant (and completely legitimate) gripe or concern with this project - "Why is it taking so long? Is the project ever even going to get done? I'm losing faith. The creator isn't even taking it seriously..." etc, etc, etc. Not that a lot of people are saying this, but every once in a while, I get the rather soul-stabbing message where someone puts into question our dedication to the project, and/or slights the constant, daily effort that goes into this game.
Again, I'm not saying that the concern isn't warranted. We are far past our expected delivery date. But I have attempted at every turn to quickly react to concerns, to be transparent about progress, to show updates, and even took a little time out to generate the beta to demonstrate just how far we've come while offering something to satiate those that were passionately and impatiently waiting for their new NES experience. I'm not sure what else I can do...I cannot speed up the process.
But I digress - I'm not posting to be defensive. I use this prologue only as a primer to launch into a fun behind the scenes analysis of a not-so-fun series of connected examples as to why it takes so long to develop for the NES, which occurred this past week. I'm going to talk about limitations, delve into code logic, and get all mathy for a moment. To keep it as accessible as possible for even non-programmers I'm going to try to use some colorful metaphors and lots of images. I hope you all enjoy what will likely end up a long, hopefully fascinating anecdote about NES programming. Here goes...and understand that as long as the 'novel' I'm about to write is, it's still a ridiculous oversimplification...
Chapter One: Map Tragedy....
Many of you are very familiar with the fact that in order to create Mystic Searches, we had to develop a set of software tools (a lot of you are even waiting for your turn to play with them!). Why was this important? Well, compare developing for NES to working in a modern game development environment...let's say you're an indie studio and working with Unity, Unreal, GameMaker, or some other robust tool kit. You want to add graphics? You open up Photoshop or some other comprehensive image editing software, you create the graphic, and you bring that graphic into the environment as an asset...most times, just "load image", select file, have access. These engines and development environments have pre-existing code that handles things like drawing the graphics, multi frame animations, assigning those graphics to game objects, colorizing the graphics, etc. But that's not how it works with the NES. Not at all.
Here's our engine's method as to how to get object graphics on the screen for the NES.
Now, keep in mind, these tables, that animation code...that's not how *NES* works, per se. It is how *we* have organized our data, and the code routines *we* have developed to use that data to work with how the NES does work for the desired effect (drawing animated objects to the screen). Every game is potentially dramatically different in how it handles this.
So imagine, if you will, what it's like to create this sort of animation. In our game, a character can have up to 8 action types, in all 8 directions (up, down, left, right, and diagonals). So that's 64 possible types of distinct animations, each with four frames apiece...that's 256 different potential frames for each object. Each sprite tile has its own attribute data byte (what color, whether the tile is *flipped* or not), and each object can be arbitrarily wide or tall (our hero, for instance, is made up of 6 sprites...2 wide, 3 tall). So for our hero, that's potentially 3072 values that can be defined in a giant table in order to create his animations. That's...just for our hero player. That's 3072 hex or binary values we'd have to write into the table, one at a time, to create a full animation set.
This is essentially what would happen in our head for every tile of every frame of every direction of every action type for every type of object as we wrote things byte by byte...
Imagine the time it takes to do that over 3000 times...for one single object (and granted, some objects don't make use of all their actions or directions, you get the idea...)
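For a sense of scale, the arithmetic above can be sketched out in a few lines of Python. The constants mirror the figures mentioned; the table layout itself is a simplification, not our engine's actual format:

```python
# Back-of-the-envelope math for one object's animation table.
ACTIONS = 8              # action types per object
DIRECTIONS = 8           # up, down, left, right, and the four diagonals
FRAMES = 4               # frames per animation
SPRITES_PER_OBJECT = 6   # e.g. our hero: 2 sprites wide x 3 tall
BYTES_PER_SPRITE = 2     # tile index + attribute byte (palette, flip flags)

animations = ACTIONS * DIRECTIONS                  # 64 distinct animations
frames = animations * FRAMES                       # 256 potential frames
table_bytes = frames * SPRITES_PER_OBJECT * BYTES_PER_SPRITE

print(animations, frames, table_bytes)             # 64 256 3072
```

Every one of those 3072 bytes had to be hand-typed, and re-typed whenever the tileset changed, before the tool existed.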
But now compound the problem. Every time we wanted to actually see what the object would look like (and make sure we put in all the values correctly), we would need to edit the binary code of the screen, where some of the bytes determine which objects get loaded. Then we would compile the game and check it in an emulator, just to get a positive test of what it actually looked like in game and see whether all of the values we entered were correct. We actually did it this way for a while, believe it or not.
And imagine what it was like the times we had to change the tileset...going through all of those values, changing all the bytes to the new correct values, and testing to make sure we got them all....
Again, all compared to a modern tool, where you create the graphic, load it in, assign it to object, and away you go.
This is an example of why NES developers (not just us, but anyone creating anything of any level of complexity) create tools to mine for this data. Here's a look at our interface for creating object data:
This handles all the big dumb table creation in a visual and intuitive way. WE don't make the tables, the TOOL makes the tables. We just create in the tool.
Here we define how wide and tall the sprite is, what palette it will use, what pattern table assignments are used for each tile slot, for each direction, for each action type (and also, in the deeper menus, it allows you to define all types of things specific to our engine, like strength, speed, hit points, bounding box, etc). You can define these values in a way that gives you instant feedback, and it automatically creates the table, puts the table in the right place, so that when you compile the code, our animation engine from above looks at the right places to get the right data. To put it simply, we never have to write code to create objects ever again. This part of the tool generates the bytes while allowing us to see what it will look and behave like in real time.
Objects aren't the only thing the tool handles. It handles all sorts of things, like initialization data (where your player starts, what inventory he has, etc), screen data, what song plays on what screen, what monsters are on what screen, NPC dialogue, a ton of other things, the most important of which is screen generation. If you understand how obnoxious creating character animations byte by byte would be, imagine creating an entire map the same way...all of the graphics shown, byte by byte, in 8 pixel increments...then all of the collision data, byte by byte, in 8 pixel increments...compile, test...oops! tree is in the wrong spot! Looking through thousands of lines of code to find the hex value for the tree in a haystack of hex values, changing it, compiling and testing again....yikes!
It's truly the creation difference between trying to conceptualize what the screen would look like in your head and writing it out as long strings of binary values (what the NES actually understands - speaking 'its' language) versus actually seeing what the screen would look like (seeing what the PLAYER actually sees, and having the TOOL translate it into what the NES actually understands).....
The two images shown above are actually the exact same thing. The table of bytes is the actual assembly code generated by the tool. Conversely, the screen image from the tool is the image that generates the bytes in the byte table. They are exactly the same. But, of course, below is MUCH easier, faster, more intuitive, and honestly more fun to work with!
And as our engine became more and more complex, the dependence on the tools became more and more significant, because the complexity of the data that we needed to mine increased exponentially, and the long strings of data got more and more unruly to look at in text editors. We continued to modify the tools to adapt to new needs that cropped up, new possibilities that would extend what we could accomplish, new ways to optimize data to squeeze more stuff into the cartridge, etc. Every time a change was made to the assembly language that fundamentally changed how the game worked, corresponding changes had to be made to the tool until they worked together (and vice versa). We worked with Shiru, a dude with long experience making awesome tools, to integrate an emulator into the tool, so you can one-click deploy to test changes. We worked with Infinite NES Lives to have one-click flashing to cartridge using their cart flasher.
Unfortunately, as the tool is also a complex piece of software in its own right, it occasionally suffers bugs and glitches. And last week, a glitch happened at the worst possible time. While saving out my current version of the map, the tool glitched and threw up an error. The map had not finished saving, and this version of the map was not an iteration...so literally, it overwrote the majority of the map with null values...if you didn't get that jargon, the save data was erased. Imagine working on a Microsoft Word document and having the save data stop after the first sentence, erasing the rest of your twenty page thesis. That's essentially what happened.
Tragedy.
However, maybe not as terrible as it sounds, for a few reasons. I have exported a PNG of every iteration of the map, so I had the world designed, which is honestly half the challenge. And it's not like I'd have to start all over from the beginning...I had been iterating (saving-as) along the way. So I only lost about a week of work. It also helped us track down the bug and build in some failsafes to prevent that particular save-crash (which had never happened before) from ever happening again. But it was heartbreaking to lose about a week's worth of screen design (and really, by not iterating my saves, I have no one to blame but myself, which just made me more frustrated).
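For what it's worth, the classic failsafe for this kind of half-finished save is to write to a temporary file first, and only swap it over the real file once the write fully succeeds. Here's a minimal sketch in Python...purely illustrative, and not a claim about how our tool implements its fix internally:

```python
import os
import tempfile

def safe_save(path, data):
    """Write data to path atomically: if the write fails partway through,
    the original file on disk is left untouched."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())          # make sure bytes actually hit disk
        os.replace(tmp_path, path)        # atomic rename on the same filesystem
    except Exception:
        os.remove(tmp_path)               # clean up the partial temp file
        raise
```

A crash during the write leaves only a stray temp file behind; the previous good save survives.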
So that's how this started...with a map tragedy. I wondered if I could turn a negative into a positive.....
Chapter Two: Facing the limits...
So knowing I'd have to recreate some screens, I wondered if I could improve on the design. There was a silly idea that I'd been flirting with. It'll take me a moment and a bit of time to explain, so stay with me.
One of the main, easy-to-identify limits of the NES is its very limited color usage. A lot of people understand that the NES can't reproduce a lot of colors. In fact, the following are the only colors (ostensibly) that the NES can create:
Now, there are some little tricks where you can get more range out of this, by adding intensity to either red, green or blue...imagine it like putting a red, green or blue filter over a lens.
But the limitation goes much further than that. Let's pretend that you were making a game in a modern tool...something simple, like GameMaker, and you wanted to try to emulate the NES aesthetic. The first thing you might do is limit yourself to the above colors. But that wouldn't go nearly far enough.
For each screen, you'd have to choose four groupings of four colors for the background, and four groupings of four colors for sprites (things like players, monsters, items, etc). I always refer to these groupings as sub-palettes.
(to the right? oops...I mean to the left...)
And to take it even a little further, you can only create 16x16 pixel areas using a single grouping of colors. For instance, using the above palettes, I can create a background tile that is black, light brown, dark brown, and green, but I can't create one that is black, light brown, blue, and white, even though all of those *colors* are loaded. The tiles can only be created within those groupings of four colors. Lastly...the first color in each grouping of four is always the same as the first color in the other three groups...it represents the *transparent* color. For us, it's almost always black.
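The constraint described above boils down to a single rule: a 16x16 background area is drawable only if every color in it comes from ONE sub-palette. Here's a quick Python sketch of that rule. The hex values are illustrative NES palette indices chosen for the example, not our game's actual screen palettes:

```python
# Each 16x16 background area must draw all of its colors from a single
# 4-color sub-palette, and color 0 (the shared "transparent" color) is
# the same in every group -- for us, almost always black.
SHARED = 0x0F  # black

bg_subpalettes = [
    {SHARED, 0x17, 0x27, 0x1A},  # black, dark brown, light brown, green
    {SHARED, 0x1A, 0x29, 0x12},  # black, green, light green, blue
    {SHARED, 0x12, 0x22, 0x30},  # black, blue, light blue, white
    {SHARED, 0x16, 0x12, 0x30},  # black, red, blue, white (menu bar)
]

def drawable(tile_colors):
    """True if ALL of the area's colors fit inside one sub-palette."""
    return any(tile_colors <= pal for pal in bg_subpalettes)

print(drawable({SHARED, 0x17, 0x27, 0x1A}))  # True: fits the first group
print(drawable({SHARED, 0x17, 0x12, 0x30}))  # False: brown+blue+white span groups
```

So even though brown, blue, and white are all *loaded* on the screen, no single tile can use that combination...which is exactly the planning headache described in the next few paragraphs.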
So when designing assets, you have a few choices to maximize your colors. One - make everything squared off. Make every asset completely square. Each square uses 3 distinct colors + black. So your tree could have completely different colors than, say, your ground (this tree has a lighter color leaf than its ground would be in the next few examples, for instance). That probably gives you the most color depth, but kinda looks crappy because...squares (this is also a craptastic quick non-real squared off sprite just to give you the idea).
But think about what happens if you want to round edges - if you want to round an edge on a tree graphic, that means *behind the tree* has to be green (ground color), so you'd need a sub-palette that includes the ground color, the tree color, and probably brown for a tree trunk...so that's it. No apples for your tree. No multi-tinted trees (unless you can use the same ground color for one of those tones, like seen here. Notice, no *light green* for leaves possible...).
But what if you NEED to have apples on your tree? Ok...they can be on the top half. You could make one sub-palette that is green and red and tree green...and the other half uses a different sub-palette that is green and BROWN and tree green. Apples could be on the top half, and the tree trunk could be on the bottom half. Unfortunately, though, now you've used two of your sub-palettes up...making your screen less colorful. (also, quickly drawn in apples, non real sprite art to demonstrate the point).
You could also just go completely abstract. It is the NES...you can get away with fully red trees or monochromatic, all-green trees (no trunk at all). It ends up looking less visually interesting, especially considering our modern sensibilities of what *retro games* look like, but it'll still PLAY the same.
We tend to use almost predominantly that method of rounded edges, even though it does often mean redundancy across sub-palettes (for instance, almost all of our sub-palettes will contain the ground color). If you look back up at our screen compared to its palette, you can see how carefully you have to plan your color layout compared to your tile design in order to get something that really conveys visual interest. For instance, it would be great to have a lighter pink color as the bright spot for those crystals (instead of the light green), but then we'd have no room for the light green of the trees. Or, we could toss the pink over on the water sub-palette instead of the *ground green*, BUT, then our water would have to be square, because the ground around it would be pink...
But now what about that last sub-palette? The one with the red and blue and white? None of those colors are seen in the screen. So why can't I use that last sub-palette to get even more color?
Well, for us...those colors are used in the menu bar, which is loaded on every screen, so it has to be the last sub-palette for every screen palette.
But man...what if it DIDN'T. What if there was a way to load in blue and red and white for that last sub-palette, draw the menu bar, and then at the end of the menu bar, update the colors in the sub-palette to whatever I wanted them to be? Was it possible? I could get three more colors per screen! And while that doesn't sound like a lot...consider that it increases the color possibilities by 25%!
And it's not just about aesthetics. Yes, it would be great to be able to, say, have two toned brown wood, two toned gray stone, two toned blue water, AND two toned green trees all on the same screen with a common *ground* color...it would look fantastic. But what about for times when the color matters to game mechanics? Like - what if I wanted brown paths which were walkable, blue water which you fell into, and red lava which hurt you all on screen at the same time? Along with rocks / trees / stone? Without that extra palette, it was sort of impossible...I've had to pick and choose what features a screen had based on what colors I could use to represent them, and I had to make mostly redundant palettes...this screen has water and paths and trees, that screen has poison swamp and paths and trees, that screen has water and paths and poison swamps but NO trees, etc.
So...would it be possible? I started to dig in, and actually found a method to make it happen! I wanted to turn the negative (losing the screens) into a positive (making the screens even better!).
If you think what I've explained above sounds laborious and overwhelming compared to how modern games are created...that's nothing compared to what's involved in trying to split the screen and change colors out mid-frame.
Chapter 3: Defying the Limits!
I was vaguely familiar with a trick, called the sprite 0 hit trick, used to maintain a static menu bar in scrolling games. It's actually pretty straightforward. In the simplest explanation, the first sprite to be drawn to the screen is sprite 0. The NES looks for the place where sprite 0 first hits a non-0 background pixel, and when that hit occurs, it sends a message. It yells "hey now! There was a hit! Wanna do anything?" At that time, you can say "yes! I want to do things!". Looking at how Super Mario Brothers handles its menu while scrolling is probably the easiest to understand and most common example.
See how the screen scrolls with Mario, but the menu bar stays in the same place?
Here's what's really happening:
This is a side by side comparison of the same screen of Super Mario Brothers, one with the background and sprites turned on, and one with only the sprites turned on. You can see that there is a tiny little sprite up in the menu bar that almost looks like a leftover piece of background tile. That tiny sprite is the bottom of the coin...this is the point of intersection, where the NES identifies the sprite zero hit. In SMB, the game confines the scroll value to 0 for the menubar...when the sprite zero hit happens, the rest of the screen is drawn with the scroll value set to however far Mario is in the level (essentially speaking). What's before the sprite zero hit stays still, what's after is moved however far over Mario has progressed into that level.
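In other words, the frame gets split at the sprite zero hit: every scanline before it uses a scroll of 0 (so the menu stays put), and every scanline after it uses the camera position. Here's a toy model of that split in Python. The hardware behavior is hugely simplified, and `MENU_HEIGHT` is just an illustrative value, not SMB's actual split line:

```python
# Toy model of a sprite 0 hit screen split: scanlines before the hit
# use scroll 0 (static menu bar), scanlines after use the camera scroll.
MENU_HEIGHT = 32  # illustrative scanline where sprite 0 sits

def render_frame(camera_x, screen_height=240):
    """Return the horizontal scroll value applied on each scanline."""
    scroll_per_line = []
    for line in range(screen_height):
        if line < MENU_HEIGHT:
            scroll_per_line.append(0)         # before the hit: menu stays still
        else:
            scroll_per_line.append(camera_x)  # after the hit: world scrolls
    return scroll_per_line

frame = render_frame(camera_x=100)
print(frame[0], frame[239])  # 0 100
```

The key insight for our purposes: *anything* you can change mid-frame this way...scroll, or palette colors...only affects the scanlines after the hit.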
Now, Mystic Searches has no scrolling. I'd never had to really make use of the sprite 0 hit for anything, so it was sort of foreign territory. But...my hypothesis was simple enough. Couldn't I just do the same sort of thing, but instead of setting the scroll, set three new colors to overwrite the menu colors? Load in red, blue, and white in the fourth sub-palette...until the sprite zero hit...then change them to whatever colors I needed for that particular screen? Genius!
Day 1: I spent a day really digging in to how the Sprite 0 hit trick worked. I confirmed what I knew, and began writing some routines that would draw sprite 0 to the edge of the menubar, and then change the palettes. When I finally got the code written and tested, the game simply crashed out. It froze on an all gray screen. I was all too familiar with that...it simply meant that the game hadn't loaded at all.
I knew in sort of a passive way that some emulators had *timing issues*, and I knew sprite zero hits were related to *timing*, so I figured I'd try it in a few other emulators. Of course, finding another emulator that supported the mapper we're using took a bit of time. I'd download the emulator and try the last known working version of the game on it just to see if it was a viable emulator for a second test. I had to work my way through two or three emus. Finally, after finding one that did load the game, it also gray-screened me. Wasted time...it wasn't the emulator.
I tweaked and massaged the code to no avail. After a lot of trial and error, my best hypothesis was that the routine was doing the check for sprite 0 BEFORE sprites and/or backgrounds were being drawn, meaning it never found sprite 0, meaning it was crashing due to being stuck in an existential loop to find meaning and purpose! It was hanging on the command to wait for the sprite 0 hit because there WAS no sprite zero hit...neither the sprites nor the backgrounds had been drawn yet, so there was nothing to hit.
Day 2: "And on the second day, the programmer did create a routine to bypass the sprite 0 check if both backgrounds and sprites were not yet drawn, so it would only run that code if it was possible for a sprite 0 hit to happen. And it was good."
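That day-2 guard looks roughly like this. (A hedged sketch: `ppu_mask_shadow` is a hypothetical RAM copy of the last value written to PPUMASK, a common pattern since $2001 is write-only.)

```asm
; Only wait for the sprite 0 hit if rendering is actually on;
; otherwise the wait loop below can never exit (the day-1 freeze).
    lda ppu_mask_shadow  ; last value we wrote to $2001 (PPUMASK)
    and #%00011000       ; bit 3 = show background, bit 4 = show sprites
    beq skip_split       ; nothing rendering yet -> no hit is possible
s0_wait:
    bit $2002            ; PPUSTATUS: bit 6 -> V flag
    bvc s0_wait
    ; ...mid-frame palette change would go here...
skip_split:
```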
Or at least, it seemed good. Did it fix the problem? Unknown. Because...memory. The game could not compile to test the new code, because the new routines overflowed the memory.
For an example of what I mean, here's a quick video about memory management for the NES...explained with beer. Some of you may have seen this embedded in a previous update...and might already know why I consider it the most loathsome part of developing anything complex for this particular system.
I spent much of the second day trying to do this little swapperoo with routines and banks and a little bit of code-economizing to make some extra room in my static bank, and then fixing all the bank references, correcting one new messy spaghetti-code bug at a time, so that I could run the cool new routines I'd written and test things out. After a bunch of annoying, brain bending think work, I got the game running ok (at least, so far as I could tell). So I re-introduced my sprite 0 hit check with the caveats of when to ignore it.
Annnnnnnd...failure...sort of. This time, the game loaded, but on the first screen that would've checked for sprite 0, the scroll went haywire and THEN the game froze. This was bizarre...I hadn't told it to do ANYTHING yet, other than check for sprite zero...whether it saw it or not, just continue on with the code. It shouldn't have done anything yet, let alone mess with the scroll (which I wasn't even referencing yet, nor did I plan to!) Argh!
I stepped through the code with a debugger to try to find where the problem might have been. New problem, new hypothesis. There were certain frames in which, for various reasons, I rewrote the sprite data. At the start of these frames, I blanked all of the sprites, and then rewrote them - basically I did this to clear sprites that shouldn't be in the game anymore. Also, this had to happen for my object depth routines to work. So what must have been happening was that in one of those frames where the sprites were blanked, there could never be a sprite 0 hit, because sprite 0 was also blanked (no longer being drawn where it would collide with a background), causing the game to go into full lockdown infinite loop mode again!
Day 3: On the third day, I attempted to find all of the writes to sprite data, told the code to absolutely ignore ANY writes to sprite 0, and hardcoded a sprite 0 write to the position I needed twice per frame...at the beginning, and again at the end. This was just for testing purposes...it wasn't a feasible solution for the actual game, but at least it could get me to a point where I could test my sprite zero hit. But it STILL gave me issues, and my hypothesis was that I hadn't gotten to all of the cases where sprite 0 could have been drawn off screen, or that some other piece of code was trying to draw sprite 0 elsewhere.
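The pinned sprite 0 write would look roughly like this. (A sketch only: the `$0200` shadow-OAM page is a common NES convention, and all three constants are hypothetical values for illustration.)

```asm
; Force sprite 0 into the first OAM slot, after any routine that
; blanks/rewrites sprite data, so it can never disappear.
MENU_BOTTOM_Y = $1F      ; hypothetical: last row of the menubar
S0_TILE       = $FF      ; a tile with at least one opaque pixel
S0_X          = $C8      ; ~200 pixels into the scanline

    lda #MENU_BOTTOM_Y
    sta $0200            ; sprite 0: Y position
    lda #S0_TILE
    sta $0201            ; sprite 0: tile index
    lda #%00100000       ; priority bit set: draw behind the background
    sta $0202            ; sprite 0: attributes
    lda #S0_X
    sta $0203            ; sprite 0: X position
```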
Rather than continue to battle to find it, I instead opened up a very old version of the game. It was a version that had the same foundation as the current version, but from before I added any functionality beyond screen loading. I did this so I could see if the issue was some conflicting code that I just hadn't found, or if my routine was flawed. I took that old version and scooped out all of the guts. All that remained was a title screen and a check for controller input to go to the next screen (so I could make sure the game didn't freeze). No sprite drawing. No object management. No gameplay whatsoever. This would be my proverbial empty canvas to get at what I wanted to do and make sure at least that code was sound (I still couldn't confirm if the code itself was the problem, or if the problem was in some conflict!)
I still had to insert all of the new caveats - checking to make sure sprites and backgrounds were loaded before running the check, and then making absolutely sure that sprite 0 was being drawn every frame in the right place (and since I'd removed everything else, it was the only sprite being drawn anywhere).
Finally, a victory.
Victory? After three days of working, I managed to make it *not crash*. That was victory. It still wasn't doing anything. Anything at all...it just...didn't crash.
Day 4: Finally, I was to a place where I could begin to try to make something...anything happen. The starting point was this old screen:
See where the sprite is placed? If you look closely, you'll see it at the top left corner of the C. Funny little graphic there. Ok - so here was the challenge. The "Demo Only. Please Pardon" uses the same palette index as the "Prototype 6.2 Retropalooza" further down the screen. The goal was to make the former stay pink, but the latter be any other color (I chose value #$18...brown...for no particular reason).
What I tried was simple - right after the sprite 0 hit, I changed the values in the palette. But all hell broke loose on the screen upon testing. A jumbled mess of flashing pixels and colors, wrong values loaded...something that sort of almost approximated the screen above...like it was identifiable, but barely.
Back to researching, and to asking questions of NES programming / computer science gurus.
By the end of the day, I had found one of the issues by my own devices. See...in order to evaluate the sprites against the background, graphics rendering (drawing them to the screen) has to be turned on, of course (otherwise, no hit, because the graphics aren't on to be hit). But writes to the palette addresses to update the colors, which was the whole point of the thing, can only happen when rendering is turned off. Oops! Just two lines of code to turn the graphics back off prior to writing the new values to the palettes, but it took forever to find that tiny mistake.
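Those "two lines" sat in front of a palette write that, in sketch form, looks something like this. (Real PPU ports, but the palette entry and colors are hypothetical choices for the experiment.)

```asm
; Palette updates only stick while rendering is disabled (or during
; blanking), so turn the screen off around the $2007 write.
    lda #$00
    sta $2001        ; PPUMASK: rendering off (the missing step!)
    lda #$3F
    sta $2006        ; PPUADDR high byte: palette RAM lives at $3F00
    lda #$0D
    sta $2006        ; PPUADDR low byte: hypothetical entry in sub-palette 3
    lda #$18         ; $18 = the brown from the experiment
    sta $2007        ; PPUDATA: write the new color
    lda #$1E
    sta $2001        ; PPUMASK: rendering back on
```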
Day 5: Unfortunately, fixing that mistake still left me with a wonky mess of a screen. The good news was a combination of research and advice from NES developers brought my attention to glaringly obvious problems with my method.
The updates need to happen during a period called hblank. What is hblank? Well...old cathode ray tubes worked by sweeping a point of light across the screen very, very fast, then moving down a row and repeating; when it got to the bottom, it traveled back up to the top and started over. That's how the illusion of an image was created.
This whole process happens much faster than in this gif, and it goes line by line to the bottom before vblank (which is when the light travels back up to the top) but hopefully it gives you the idea. The only time to do the operations needed to change the palette had to fit inside of the teeny, tiny window of hblank that followed the sprite 0 hit (not arbitrarily after the hit, as I was trying to do...but specifically in that little window of time).
But the first issue with trying to make this all happen inside of the hblank was that the hblank didn't start immediately after the sprite 0 hit. Remember, THIS (image below) is where the sprite zero hit was happening...so I had to figure out a way to *wait until the yellow arrow went the distance of the yellow line and finally reached the side of the screen before playing with rendering and trying to write to the palette*.
So how the heck do you do that?.......
Day 6: More investigation. How can you literally count the time of a stupid scan line after a sprite 0 hit?! Well, what I learned was that each *cycle* of ASM code equals about the same as 3 pixels drawn. So...the sprite 0 hit was occurring 200 pixels into a 256 pixel wide screen. I had 56 *pixels* until the edge...meaning I had 56/3 ≈ 18.7 cycles to fill with dummy information.
Each instruction in assembly language takes a certain number of cycles, the smallest being 2. So I had to write about 19 cycles' worth of junk instructions between my sprite 0 hit and hblank. Like...I had to turn my code into the equivalent of Christopher Walken giving a soliloquy. "You need to...wait....for AAYCH blank...."
I learned about an instruction called NOP - it's simply a null, blank instruction that does nothing (like those ellipses in the immortal Christopher Walken's delivery), but takes 2 cycles. So I could NOP 10 times (which would be 20 cycles...I needed about 19) and be inside hblank, where I could finally safely write to the palettes.
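Put together, the burn-time idea is a sketch like this (same hedges as before: real PPU status port, hypothetical labels):

```asm
s0_wait:
    bit $2002        ; wait for the sprite 0 hit (~pixel 200)
    bvc s0_wait
    nop              ; each NOP burns 2 CPU cycles = ~6 pixels
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop              ; 10 NOPs = 20 cycles = ~60 pixels: inside hblank
    ; ...now (in theory) safe to touch the palette...
```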
I did this, but it didn't quite work. It was closer, but...things were still messed up. There was a huge flickering line of pixels, and something funny had happened with the scrolling again so the image was all misaligned...like this:
But holy hell...do you see?! Do you see the brown text! That was proof positive that this was working! I successfully changed the palette index...the same *magic marker* was being used to color the pink text at the top and the brown text for the rest of the screen due to this routine! After 6 days, I was finally seeing a little bit of evidence of the fact this method COULD work.
But the flashy glitched pixels and the weird scroll...why?!
Well, remember how I said hblank was super fast? Really short? And remember how I said I could only do all the functionality I needed to do inside of hblank? Well...there isn't enough time in hblank to actually make the palette changes!
Dealing with cycles, I learned that hblank only lasts about 28 cycles. That's barely longer than 'Chris Walken's ellipses' from above! But in order to change the color, I had to do the following things:
Turn off rendering (10 cycles)
Get the memory address for the palette index I wanted to change (16 cycles)
actually CHANGE the palette (write new values to those memory addresses) (22 cycles)
fix the scroll (12 cycles)
Turn on rendering (10 cycles)
That's...70 cycles. That's almost three times as much functionality as I can possibly cram into that tiny little hblank! It was...impossible...? It was like trying to cram the pledge of allegiance into one of Walken's ellipses. You know..."I have a fevah...(I pledge allegiance to the flag of the...)...and the only prescription..." DAMNIT! Fail.
Day 7: What if...now this is crazy...but what if I broke it down into manageable chunks....
WAIT FOR HBLANK 1
Turn off rendering (10 cycles)
Get new memory address (16 cycles)
(only 26 cycles...)
WAIT FOR HBLANK 2
Actually change the palettes (22 cycles)
WAIT FOR HBLANK 3:
fix scroll (12 cycles)
turn on rendering (10 cycles)
(only 22 cycles)
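As a sketch, the chunked version looks like this. (The cycle figures are the ones from the list above; the "burn" waits between scanlines are elided, and the palette address/values are the same hypothetical choices as before.)

```asm
; Split the 70-cycle job across three consecutive hblanks.
; hblank 1: rendering off + latch the palette address (~26 cycles)
    lda #$00
    sta $2001        ; PPUMASK: rendering off
    lda #$3F
    sta $2006
    lda #$0D
    sta $2006        ; PPUADDR now points into palette RAM
    ; ...burn one scanline's worth of cycles (the black line)...
; hblank 2: the actual palette write (~22 cycles)
    lda #$18
    sta $2007        ; PPUDATA: new color
    ; ...burn to the next hblank...
; hblank 3: fix scroll + rendering on (~22 cycles)
    lda #$00
    sta $2005
    sta $2005        ; reset both scroll writes
    lda #$1E
    sta $2001        ; PPUMASK: rendering back on
```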
It would be like the Walken / pledge example...if it were written like this:
"I have a fevah....(I pledge allegiance to the flag)...and the only prescription....(of the United States...)...is more cowbell! (of America)"
The big trade off being that there would be a big black line across the screen for that second hblank wait, because doing it this way would mean that rendering would be turned off for that entire scanline. But you know what? The idea of this was to be at the bottom of a menu bar, separating an already black menu bar bottom from the playing field...so, who cares about a black line? One wouldn't even notice. No problem! I ended up moving sprite 0 down the page to connect with the BOTTOM of the C in Mystic instead so the black line would occur under the word Mystic...there, even on this screen, it should be unnoticeable.
So...on the seventh day, while some world builders were resting, I spent the day making this happen!
The impossible had been achieved! Victory! Eureka! Look at that, I'd managed to, mid frame, change the palette so that I could isolate the menubar colors from the gameplay colors and completely maximize the color depth of Mystic Searches! I went to bed very happy, feeling very accomplished, feeling validated! I heard the Link-gets-an-item duh-dun-duh-DUN! in my head over and over every time I looked at it.
Then...
Day 8: I started to quantify how much work it would take to achieve the very same thing in the latest build of the actual game...and I got a little gun shy. Not long into the day, I got a nice message from one of the NES developers warning me that emulators sometimes have timing issues, and that they are completely undependable for tricks like this. That I should test on actual hardware. But I'd figured it all out, right? I made it work! Mathematically! Surely it would behave the same on the actual system. But it was worth testing.
So I spent time prepping the file and flashing it to an actual cartridge. Set up my top loader, and blew into the end of the cart with a prayer in my heart, just like in the old days...
The actual hardware test was this:
All of the hard work on getting that accurate timing and frame counting was all for nothing, because the visual representation on screen when I was testing wouldn't match the actual hardware...meaning I'd have to literally fly blind...continue to tweak, reflash, test on real hardware until it worked...and then always seeing it messed up on the emulator, with the knowledge it *should* work when I finally play on real hardware...
Despite that, I started to consider it anyway, then I got another nice message from yet another supportive developer who warned that turning off and on rendering and making changes to the palette mid-frame like this could also inexplicably affect sprite draws for the rest of the frame...weird flickers or jitters, things drawn in wrong places or in wrong orders, etc. Of course, I had no way to test that, because I was working on a version where I'd strategically disabled all sprite drawing except sprite 0....but that would not be acceptable under any circumstances.
Eight days. Lost. This was a great moment for me to delve deeper into what the NES was capable of and into what we'd be able to do with Mystic Searches, while at the same time summing to exactly zero progress for eight solid days of hard work, headaches, and confusion. I could probably continue down the path to get this feature to work, and it would be as cool as I want it to be...but how much longer do I invest in that thing? Knowing at the end of the day it might have unforeseen consequences and include all sorts of graphical gremlins into the game?
Epilogue
So...why does it take so long to create a complex, cartridge based, hardware playable game for the Nintendo Entertainment System? Consider this...the following video demonstrates me doing the exact same thing I spent 8 days trying to do (and ultimately failed to do), which is to introduce HUD colors not present in the main gameplay area. Only in the video, I'm using a modern development tool. Where there is no counting scan lines to hblank. No memory management. No writing laborious code libraries to make intuitively simple things happen.
Multiply that discrepancy by the millions of little things that a game must do in order to function correctly, and it might start to give you an idea as to the order of magnitude more complicated it is to develop for the NES, and proportionately how much extra time it takes.
To be continued.........
Artist? Want to be part of Mystic Searches?
over 8 years ago
– Wed, Jul 26, 2017 at 01:12:07 PM
Hey all - so here's a fun thing. As we wind closer to the edges of filling up our game's memory (and optimizing a bit as we go), we're finding that amazingly there are some monster slots left to fill in Mystic Searches. We have plenty of unused concepts we could implement, but we've really enjoyed all of the critical feedback from everyone and everyone's input. We thought it might be fun to reward you guys - for everyone who always wanted to design for a real NES game, why not make that opportunity possible? I love that so many of you are as passionate as we are, so why not involve you creatively? Let's do a contest like those old Nintendo Power competitions!
Of course, we're not at a loss for ideas here...we have an overabundance. But I love the thought of using this as an opportunity for you to be part of this world's creation. If there is enough interest from our supporters, maybe we'll open this up into a thing. If you or anyone you know would be interested in participating, please like / comment on this update!