Game Development Principles
I joined the industry quite some time ago, and that time has given me the chance to work in studios at very different stages. Most of my experience has been at big AAA studios or large companies; the only experience I missed was the indie one. I have worked at very established studios with multiple decades of history, as well as at studios that didn’t have much more than a year of history (but were established to do development at the AAA level).
Some of them had established cultures with certain explicit or implicit development principles, while others were still trying to come up with them. Those are very different contexts: in an established studio, many people share a common history and a set of development principles, while a new studio has neither.
But regardless of the situation, either you will have to contrast your own development principles against the established ones to see whether the studio is a fit for you, or your own development principles will help define them for a new team or studio.
You can let those development principles emerge organically, but you don’t know where they will end up. You would be lucky to get a good, sustainable set of principles out of a new team or studio that doesn’t have a shared history. And if you are joining an established studio, you will find the experience quite challenging if your development principles don’t align with the team’s. At the same time, bad principles can be detrimental to how people feel at work, which means you end up with a team composition that tolerates or thrives on negative culture. Unfortunately, there are games that have been commercially successful while being developed with a poor set of principles (some of them clearly toxic), but those were successes despite those principles rather than because of them.
Over time all my experiences made it evident that the implicit and explicit development principles were key, and that they highly correlated with the quality of the shipping game or tech. In early 2023 I saw 2 Player Productions’ excellent Double Fine PsychOdyssey documentary, and I also watched some interviews about it when it came out. One interview with Tim Schafer stuck with me: he said that one of his key takeaways from watching the documentary was that he needed to write down the values of Double Fine Productions. While he was talking about the values of his studio, I realized that there was something really useful about thinking through and writing down my own development principles to avoid losing myself in the immediate needs of development. That led me to create a bullet point list on my phone where I started writing them down as a single line each. Given how useful that exercise was for me, I thought it would make sense to explain and share them, with the hope that people would think about their own principles and perhaps share them as well.
Principles
An important disclaimer is that I don’t think of or follow these principles dogmatically. Context is extremely important, and it isn’t a great idea to be dogmatic about a set of principles that are loosely defined. At the same time, one principle might seem to contradict another in certain situations, making dogmatism useless.
The other important disclaimer is that any mention of priorities is not meant to be taken in an absolute way. If concept A is meant to be prioritized over concept B, it just means that A should get more attention than B, not that all of the attention should go to A and none to B.
General Principles
Overtime is not part of the production timeline.
When I joined the industry there was a generalized belief that working in games was a privilege, and that overtime came with that privilege. In high school I would daydream about the heroic efforts of programming legends who worked 60-hour weeks as a baseline, and 80-hour weeks in the last months of development. Some studios would even romanticize that crunch culture, one such example being “BioWare Magic”. When I joined the industry the mindset was mostly the same, but its unsustainable nature became clear quickly. A notable milestone was the EA Spouse blog post, which painted the bleak reality of the industry through the lens of EA at that time. Conditions have improved since then, but not as much as the 20 years that have gone by since that post should imply. There are still long-timers of my age and older who are fine including overtime as part of the production schedule because “crunch is what we used to do in the 2000s, this isn’t crunch”. This creates a schism with the next generation of game developers, who grew up watching us struggle with our mental and physical health as a result of crunch. So, if we want a healthy, sustainable studio or team, we need to establish the principle that overtime is not part of the production timeline. Overtime should never be planned or accounted for; there should never be anybody saying “we’ll solve this during crunch”.
Studios must also take responsibility for making sure that a culture of self-crunching isn’t established. Crunch imposed by the studio is terrible, but if the team is engaged in the game they are working on, odds are that some people will start a self-imposed crunch if they don’t believe the time left is enough to ship the game. If we let that run its course, a divide will likely form between the people who decide (and are able) to stay late and the ones who leave on time, and we will eventually lose the people crunching because it is not sustainable, no matter how passionate they are.
I will avoid making up a theoretical example around this as the Double Fine PsychOdyssey documentary reflects the level of impact of this issue in a real studio.
Prioritize good opportunities for juniors over maximizing team seniority.
While just working with seniors might seem like an ideal scenario, it can be quite detrimental in the short term, and it will definitely be detrimental in the long run. Senior engineers usually deliver whatever is needed at the time, but they can also get in the way, and their work may not yield as good results as expected from the outside. A colleague from a well-known tech company said: “I find that only hiring seniors leads to a lot of issues. Ego trips, lack of questioning how things are done or just too jaded to change things up”. Another one from a well-known first-party studio mentioned “It feels like it’s going to be a huge problem in a few years when senior programmers exit the industry”. They both represent short- and long-term issues of only working with seniors. Adding juniors to our teams has big benefits. They usually come up with interesting new tools and approaches because they are not set in their ways like seniors, they can come up with questions that can challenge how things are done and help drive them in a different direction, and they have the energy to change things up.
But it isn’t enough to have an opening on our team for a junior with a reasonable salary. The opportunity needs to be good for them and for the team that is going to take them. That means there need to be enough team members, time, and seniority around the juniors to help them succeed. It should be everybody’s responsibility to answer the juniors’ questions, but also to let them get lost a little so they can find solutions on their own. Don’t add juniors as a way to offload work from overscoped seniors: the seniors will be stressed by the extra code or content changes coming in, they won’t have time to answer questions, and the junior ends up set up to fail. Long story short, create a context where seniors can support the juniors so the juniors can succeed.
In the end, don’t forget that we were all juniors at one point, and someone had to take a chance on us. We should remember the most helpful seniors that crossed our path when we were juniors, and avoid being those seniors that were negative about every new thing we proposed as juniors. When I joined the industry there were still seniors around who thought that switching away from assembly reflected juniors and semi-seniors not caring about performance, when it was already clear that wasn’t the case. We should avoid being like those seniors, and we should aim to create a context where we can fulfill the responsibility of mentoring juniors.
Avoid cargo culting.
We all have our own development history where we have seen things succeed and fail, and we have grown comfortable with certain tools, workflows, processes, or principles. But what worked fine in one context can be far from ideal in another, and things that failed in the past may have failed because they were implemented in the wrong context. This principle isn’t a call to discard past experiences; it is a call to avoid dragging them into a different context just because “we did this in the past and we know it works”. Past experiences should help us build better solutions for the context we are in, and people proposing a solution based on past experience should be able to make a reasonable argument based on concrete evidence. The merit of “we know this works” as an argument is low by itself; after all, in any team the assessment of solutions and their results is usually contested. The argument should instead be based on why a given solution worked, why the problem is similar to a previous experience, and why it applies to the actual problem in the current context.
One of the most common and damaging examples of cargo culting that I’ve experienced relates to teams diverging a generic engine to make it fit what they were familiar with. When a team must switch engines for a game, or when a team worked on a previous title with a different engine, then if cargo culting is not avoided there will be a never-ending set of challenges. A different engine is a clear example of a different context where old solutions don’t apply as-is. If we have a culture that doesn’t fight cargo culting, then we will spend a lot of resources bending the engine to do what the previous one did, it will become difficult to upgrade the engine, and the cost of most features will increase as the engine fights us at every step of the way. If we avoid cargo culting, then our focus will be on getting our game done with an approach that aligns with and leverages the design of the engine as it is. At that point we will keep divergences to a minimum, increase the odds that the cost of upgrading the engine remains manageable, and increase the odds of being able to use new features coming in with the engine.
Risk management over being risk averse.
If we asked any studio whether they think they are risk averse, the answer would be a resounding no. No studio likes to be seen as risk averse. The need of studios to distance themselves from that is such that, if we looked at their websites, in many of them we would find “innovation” listed as a value somewhere. But if a studio rarely reaches for solutions beyond what it knows works, then we know that despite the messaging, that studio has fostered a risk averse culture. What we know works shouldn’t be more than our plan B, our safety net. When the attempt to find an innovative approach that would produce a leap in quality or workflows fails, we know we can rely on our safety net to catch us. That doesn’t mean that we need to innovate on every single thing. Putting risk on areas of low relevance or return isn’t good, which is why we want to manage risk. Putting the focus on risk management means that we try new approaches for whatever has a high impact on the game itself or its development.
In general, though there are some exceptions, I’ve found cargo culting to be linked to risk aversion. The need to cargo cult often comes from the perception that if you can’t use something that “we know works”, then everything will go wrong. It is interesting that in some of those cases the amount of work necessary to implement that known solution was huge, but the risk averse mindset against unfamiliar solutions can be such that the work would seem justified.
Think in terms of probability rather than possibility.
This principle is connected to the principle "Risk management over being risk averse". If our team focuses on the possibility of something going wrong, then odds are we will become risk averse, as it is hard to design or implement anything with a 0% chance of failing. But if we focus on probability, then unless the feature can cause a catastrophic failure, we will be fine as long as we keep the odds of failure low. Thinking in terms of probability gives us more freedom to look for implementations optimized for the most probable scenarios, instead of detrimentally impacting those scenarios because of the possibility of something going wrong. While this principle might seem applicable only to engineering, it applies in just the same way to aspects that influence game design. If your studio’s experience was in making single-player first-person shooters and the next game is supposed to be online, then as you pick a networking approach you should focus on having the best experience for your most probable case. If we focused on the possibility of something going wrong, then odds are we wouldn’t take any networking approach that includes client-side prediction, as it would mispredict badly in some bad networking scenario. At that point you end up with a solution where input response is universally poor just to ensure that the few players with horrible lag and packet loss never see a misprediction. But if we focus on the most probable case, then we can contemplate client-side prediction as an option, which would allow a responsive experience for most players in most scenarios and would allow for a different game design.
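To make the trade-off concrete, here is a minimal sketch of client-side prediction with server reconciliation. All names are hypothetical, and movement is reduced to one dimension: the client applies inputs immediately for responsiveness, buffers the inputs the server hasn’t acknowledged, and when an authoritative state arrives it rewinds to the server’s position and replays the pending inputs on top of it.

```python
from dataclasses import dataclass, field

SPEED = 5.0  # units moved per input tick (assumed constant for the sketch)

@dataclass
class PredictedClient:
    position: float = 0.0
    next_seq: int = 0
    # Inputs the server has not acknowledged yet, as (sequence, move) pairs.
    pending: list = field(default_factory=list)

    def apply_input(self, move: float) -> int:
        """Apply the input locally right away (prediction) and buffer it."""
        seq = self.next_seq
        self.next_seq += 1
        self.position += move * SPEED
        self.pending.append((seq, move))
        return seq

    def on_server_state(self, acked_seq: int, server_position: float) -> None:
        """Reconcile: adopt the authoritative position, then replay the
        inputs the server has not processed yet."""
        self.pending = [(s, m) for (s, m) in self.pending if s > acked_seq]
        self.position = server_position
        for _, move in self.pending:
            self.position += move * SPEED
```

In the probable case (server and client agree), the replayed position matches the predicted one and the player sees no correction; only in the improbable case (heavy lag or packet loss) does the snap-back become visible, which is exactly the trade-off the principle argues for.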
Avoid siloing.
Siloing, gatekeeping, and factions tend to be endemic to studios with a high level of internal politics and a generalized lack of trust. Those are not nearly as present or visible in a small studio (unless the top leadership fosters them), but as you grow, the issues and their impact get worse. In those environments things tend to move very slowly because more time and energy is spent on politics than on making a great game. At the same time, the people sticking around are the ones good at internal politics (be it because they share that mindset or because they happen to be good at navigating that context), or just the people that care exclusively about their individual contribution. Eventually you get to a point where the different silos struggle when they must collaborate to ship a feature, they try to avoid features that can’t be done within the silo, and the silos develop contentious relationships because someone is always getting caught by surprise with something unexpected, causing them to question decisions. Because of that, they also gatekeep information, designs (be it game or technical designs), access to the decision-making forums, and so on. They tend to show a culture of opaqueness over transparency. The end result is poor considering the amount of resources and energy going in; people don’t understand what’s going on or why something should be done, and they either start playing the political game or stop caring about the game and/or studio as a whole and focus exclusively on delivering what they are assigned. But if we avoid siloing, gatekeeping, and factions, then people feel engaged, they have more context to make better decisions, they feel empowered to do what they think needs to be done, they’ll be invested in the end result as a whole, and the end result tends to be a better game and studio.
One shared characteristic of teams where siloing is common or encouraged is that they struggle with decision making. Silos like making decisions internally, and they are opaque about the process because they don’t want other people “messing” with their decisions. In most cases they even see themselves as entitled to make those decisions by themselves, but the issue is that it is very rare for a decision to impact a single discipline. Inevitably the silo’s decision surfaces, and the starting point of the discussion is bad because everyone outside the silo gets caught by surprise by a decision that impacts them. The people outside then challenge the decision, not necessarily because it is bad, but simply because they have no knowledge of how it was reached. At the same time the silo tries to defend (or even impose) the decision because they are already convinced of what they want, and they don’t think people outside the silo should be involved in “their” decisions anyway. At that point the actual decision has to be mediated by management, and since siloing is tolerated, the decision is often of poor quality because it is just good enough to keep a working relationship.
An example can be made out of a siloed online team. They might decide internally that the game should use a peer-to-peer architecture, and they might think that it is their decision to make given that they are the online team. The issue is that peer-to-peer game architectures often require one or all of the players to simulate the shared world locally. The decision not to go with a client-server architecture means that the simulation of the shared world needs to fit within the resources available on a single client, and that leaves other disciplines with fewer resources to do their work. So the decision that the online silo might feel entitled to make on its own becomes very problematic, and because the impact of that architectural decision is so broad, you end up with a bunch of teams putting a lot of time and energy into arguing about it. But if siloing is avoided, then the online architecture is defined as a joint decision across different disciplines, one that has higher odds of having the right set of trade-offs for the game.
Diversity of opinions is more valuable than smooth sailing.
Smooth sailing can be a tempting road, and it is often mistaken for a characteristic of a team or studio with good culture. When we are smooth sailing there are few disagreements, uncomfortable conversations are mostly avoided, and challenging decisions take a long time to make. Basically, nobody is rocking the boat. This can be enforced by removing the people that cause “too much noise”. The issue is that it tends to be conducive to shipping mediocre games, the people working on them show little engagement, and in any competitive landscape we are bound to struggle. But if we create an honest, diverse environment, then creativity starts to flourish. People will feel engaged in the tasks they are doing, invested in the game they are working on, feel that their colleagues are invested as well, and provide more and better approaches to turn a good idea into a great feature. In general, that increases the odds of ending up with a good game that people are happy to have shipped, and happy with the team they shipped it with.
It is worth mentioning that people from different backgrounds will express themselves in different ways, and the objective shouldn’t be to get the outliers to align with a dominant culture. Having people of different backgrounds is not only helpful in some theoretical sense; it is helpful in a very concrete way, as gamers also come from diverse backgrounds. Both gamers and developers want to be able to relate to the game, and the odds of making something relatable are low if we only have people from a single background. A single shared background will increase the odds of smooth sailing, but smooth sailing development doesn’t have much of a return for people with medium or high standards.
Rock tumbling can be used as a metaphor. What goes into the rock tumbler are just some regular rocks. After some time tumbling with grit and polish, causing some noise and friction, what comes out are beautiful rocks. Smooth sailing is like having the rock tumbler off: there is no noise or friction, but the end result looks like the regular rocks we put in. If we have a team with diverse opinions, then the rocks will cause some noise and friction as they exchange and polish the ideas and implementations, and what comes out are beautiful rocks.
With that said, do keep in mind that not everybody (including you) might be a fit for the team or studio culture. Even studios that people on the outside idealize (like Double Fine Productions) struggle with people not fitting, and changes need to be made, as seen in the PsychOdyssey documentary. Not fitting is not reflective of someone being wrong or right; it just means their principles are not aligned with the rest of the team or studio. That’s why it is important to understand the principles that guide us and to contrast them against the studio’s. Keeping with the rock tumbling metaphor, if the rock tumbler spins too fast there will be too much force, causing the rocks to crack.
*Rocks before and after going through a rock tumbler.*
Time and energy spent on a discussion should be proportional to the impact of the decision.
It is rare to find two people within an organization that are fully aligned in how they think about problems and solutions. People have different experiences; they have passion and/or knowledge in different areas. But game development is a team sport. At some point or another, if we have a team that isn’t purely happy just by barely shipping something and isn’t just focused on smooth sailing, then odds are that two or more people will get into a discussion about decisions that need to be made. Depending on the level of passion of the people involved, discussions can get unnecessarily long and/or difficult. At the same time, rushing to a decision tends to harm the shipping game and make people feel their opinions are not relevant. But if we keep the time and energy spent proportional to the impact of the decision, then we have a shared understanding of that impact, and we minimize the odds of ending up in a vicious cycle.
I have two examples that represent the polar opposite sides of this issue. The most common discussion I’ve seen where the time and energy spent is disproportionate to the impact is around code formatting. Formatting doesn’t impact the shipping game, and for the most part IDEs can adapt, so there shouldn’t be lengthy, passionate discussions around it. The opposite example was management’s desire to wrap up a huge architectural decision because “the discussion and exploration is taking too long” even though just a few weeks had gone by. Trying to rush decisions with such a multidisciplinary impact is also very negative.
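One way to keep formatting discussions proportional to their impact is to pick a style once, commit a formatter configuration, and let tooling enforce it. For a C++ team that could be a `.clang-format` file like the sketch below; the specific option values are illustrative assumptions, not a recommendation:

```yaml
# .clang-format — decide once, commit, and let the tool end the debate.
BasedOnStyle: LLVM        # start from a well-known preset
IndentWidth: 4            # example override; pick whatever the team votes on
ColumnLimit: 120
PointerAlignment: Left
```

With a file like this checked in, formatting stops being a matter of opinion in code reviews and becomes a mechanical pre-commit step.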
Time is part of the definition of success.
When I was a teenager in the 90s, I was highly influenced by John Carmack and id Software. They were known for the slogan “It will be done when it’s done”. Being dogmatic about that caused their development to take longer as games got more complex (at least in John Carmack’s own assessment). The release gap between Doom II and Quake was less than two years (which seemed long at the time), but the release gap between Doom III and Rage was more than seven years. That is a long time in the life of a studio and its people, and there are a lot of changes in the competitive landscape along the way. Long developments are not the result of a single decision, they are the result of the thousand decisions made along the way by different disciplines. Given that, we should constantly ask ourselves questions that aim to keep awareness of the time going by. Some examples are:
- “Are the results of our work representative of the time that has gone by?” The answer is usually intuitive rather than measurable, and it will depend on whom you ask. But asking the question helps keep awareness of time and can drive us to make changes that improve velocity. As a personal example, asking myself that question led me to stop investing time in marginal improvements to a diffuse BRDF. After two sprints it was obvious that time was going by fast, and the end results didn’t reflect the two sprints I had spent.
- “Could we have taken a shorter path to get to the same results or lessons?” If we have a team where diversity of opinion is a value, usually somebody will voice an opinion to take a different path. Asking ourselves this question gives us the opportunity to think if those alternative approaches could have produced results faster. This also gives us the opportunity to think of the paths that didn’t lead anywhere, the rabbit holes that were not that relevant, and any potential scope creep. As a personal example, I realized that some colleagues that were spending time upfront to create some debug views to aid their feature development managed to finish features faster than I did even though initially it seemed like I was making progress faster than them. This was never a competition, but asking myself the question allowed me to realize that I could take a shorter path if I invested in debug tools up front instead of waiting until it was obvious that I needed them.
- “Have we rushed too much?” While the most visible issue is teams taking too long, teams that are aware of that can also overcompensate. Having a sense of urgency and trying to show progress isn’t bad in itself, but the scope and impact of some work is too big to rush. Foundational work tends to be that way, and rushing a decision or implementation can be very detrimental. The issue isn’t just that the decision can be far from optimal; for that kind of work, it is also hard not to struggle with the sunk cost fallacy.
But we should be careful not to fall into the trap of assuming that we are doing fine in terms of time because “we learned something along the way”. Spending time on something with no return at all is obviously really bad, but having some return shouldn’t be assumed to be enough. At the same time, we must keep in mind that the objective isn’t to get things done as fast as possible; the objective is to get good things done in a reasonable time. Foundational work shouldn’t be rushed, and marginal improvements shouldn’t take forever.
Be transparent.
People usually link opaqueness to deliberate acts of misleading other people. But something can be opaque just by the decision not to share information, such as not showing what you are working on or being vague about it. If you or your team needs a private Slack channel (or direct messages) to discuss what’s being worked on, then you are being opaque. We should aim for transparency because it builds trust, it allows other disciplines to make better decisions, it allows other disciplines to help us in some cases, it allows challenges or conflicts to be handled as soon as they arise, and it gives visibility to the people that need to make sure production is on track. So be transparent about what we work on and what we are not working on, be transparent about what is going wrong and what is going right, be transparent with our bosses and with the people that report to us, and be transparent about what concerns us and what we don’t think is concerning.
Over the years I’ve had the chance to work with very transparent teams, very opaque teams, and everything in between. For the most transparent teams there were processes that helped maintain that, and many of those were deliberately avoided by opaque teams. Here are some examples:
- Little use of private communication channels. Having open channels where the team communicates is a huge plus for transparency. While that might seem like it could introduce some unnecessary noise, for the most part the net result is positive, and it is easy to directly address any issues with outside people adding noise. Sometimes private channels are necessary because you need to discuss something internally, for example when something is going wrong in the interaction with other teams. It is fine to have private channels; you just want to make sure their use is kept to what’s strictly necessary.
- Open single-source-of-truth documentation. For every feature or definition of requirements we should have a single source of truth that is freely readable by the organization. That makes the design and requirements transparent to everybody, massively reduces unnecessary discussions and disagreements, and in general reduces the amount of internal politics. For example, if someone is challenging a decision because it wouldn’t work well on platform X, we can go to that single source of truth and see whether X is a target platform. If it isn’t, we can end the discussion right there. This also reduces discussions based on hearsay, which most people find really problematic but which is extremely common in teams with a high amount of internal politics.
- Written stand-up updates. Having written stand-up updates in an open channel helps transparency, as it allows people on the outside to understand what’s going on, what the current challenges are, and what is going well. To make this useful, people on the team need to provide reasonable updates, where “reasonable” means an acceptable amount of detail. It isn’t enough to just say “I’m working on X”; that is information you can already get from project management tools such as Jira, Hansoft, or whatever.
- Avoid private meetings where possible. You certainly don’t want to make every single meeting public; a performance review or a leadership meeting should not be. But for the rest you should aim to keep meetings publicly visible on your calendars. That doesn’t mean broadcasting that there will be a meeting; it just means people should be able to see what’s keeping you and your team busy during the day. This is unlikely to cause much of a problem, since people don’t want to spend time in meetings they don’t have to be a part of, but it does provide the opportunity to be transparent, and someone with an interest in a given topic can join or ask for the conclusions afterwards.
- Shared recording of Zoom meetings. In very distributed teams this is more common, but even if you are not highly distributed it still makes sense to do this. Recording and sharing Zoom meetings provides transparency to people that were not part of the meeting, and it allows people to reference those meetings in the future.
- Shared sprint update. Create a space where each team can provide a single sprint update and wrap up the meeting with an opportunity to ask questions across all teams. In my experience that space usually caused people to talk to each other right after the meeting which reflected the benefits of being transparent.
One aspect to be aware of is that transparency must be a shared principle. If you are concerned about how you personally will look in a context where transparency is a shared value, then odds are you will have issues. The same applies if transparency is not a shared value and you are concerned that you will get reprimanded because “you made us look bad” by sharing something that was meant to stay opaque. The nature of the problem is similar to that of teams where code reviews are problematic: if you or someone on your team doesn’t share the value that a code review is a review of the code and not of the person who wrote it, then you will face issues.
Feedback should be kind, specific, timely, and candid.
Giving and receiving feedback is always a challenge. There is no recipe that ensures the feedback you provide will be received as you meant it, because everybody is different. You will also have challenges receiving feedback, be it because you find it too harsh, difficult to understand, or because you don’t think it comes from the right motivation. But giving and receiving feedback is critical to making the best game and team you can. The common example for engineers is code reviews, and the only way they are successful is if the feedback is kind, specific, timely, and candid, and if the person receiving it understands that what’s being reviewed is the code itself and not the person who wrote it. Feedback must be kind because this is a team effort, because there is a person or team behind what’s being reviewed, and because we all want to trust and be happy with the people we work with; kind feedback requires empathy, and that empathy increases the odds that your feedback will be well received and understood. Feedback must be specific because otherwise the receiver can make wrong assumptions about what’s wrong or right, and because the person or team getting the feedback needs a clear understanding and scope to be able to act on it. Feedback must be timely because otherwise you make it hard or impossible for the receiver to discuss or act on that feedback. Feedback must be candid because otherwise it is hard to tell whether the feedback is positive or negative, and whether it describes a minor inconvenience or a major issue. Putting these together creates more trust among team members and leaves little room for confusion about what needs to be addressed or celebrated.
Be careful with the pitfall of not being specific and candid in your feedback because “I’m trying to be kind”. Being kind doesn’t mean you can’t be specific and candid, in the same way that trying to be specific and candid doesn’t justify being rude and insensitive. All the facets of this principle are challenging to hit at the same time with different people, but it is very important that you constantly try to hit all of them.
Engineering Principles
Think in terms of problems rather than solutions.
When I was 13 years old, I started the process of becoming a C programmer. Throughout high school I spent my time in the Quake scene working on level editors, mods, and multiple little rendering “engines” written in C or “C with classes”. Once I finished high school abroad, I returned to Argentina, where my brother, a software engineer at the time, was highly influenced by object-oriented programming and its environment. On his bookshelf he had “Design Patterns: Elements of Reusable Object-Oriented Software”, which I read. Given my background as a programmer, I wasn’t “drinking the Kool-Aid”. Over time I realized that the issue for me was that the focus on design patterns led people to think in terms of solutions rather than problems. Thinking in terms of problems doesn’t necessarily mean that we will create a novel solution; it means that the solution was designed around the actual problem we need to solve, with the right trade-offs made along the way. If it ends up matching an existing design pattern, then so be it, but it is better to make sure the problem is solved with the best approach than to reach for the most generic solution meant to work on a broad set of problems.
It is worth saying that people who don’t think in terms of problems usually tend to engage in cargo culting.
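To make the contrast concrete, here is a small sketch with hypothetical names. The problem is simply “the HUD must react when the player’s health changes”. Starting from the pattern, you might build an abstract Subject/Observer class hierarchy; starting from the problem, a plain list of callbacks is enough. If the result happens to resemble the Observer pattern, so be it — the point is that the shape was derived from the problem, not picked in advance.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Problem-driven solution: the HUD (and anything else interested) registers
// a callback, and the health object invokes them when the value changes.
// No abstract Observer/Subject base classes unless the problem demands them.
// All names here are hypothetical.
class PlayerHealth {
public:
    using Callback = std::function<void(int newHealth)>;

    // Register something that wants to know about health changes.
    void onChanged(Callback cb) { callbacks_.push_back(std::move(cb)); }

    void damage(int amount) {
        health_ -= amount;
        for (const auto& cb : callbacks_) cb(health_);
    }

    int value() const { return health_; }

private:
    int health_ = 100;
    std::vector<Callback> callbacks_;
};
```

Usage is direct: `hp.onChanged([&](int h) { hud.setHealth(h); });`. The trade-off made here (no unsubscription, synchronous dispatch) is a deliberate fit to the stated problem; if the problem grows, the design grows with it.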
Prioritize what’s important over what’s interesting.
The development of a game is usually full of interesting challenges. A particular challenge can become so interesting that we start to lose sight of its relevance to the game and its development. We have all, at some point, gone down rabbit holes where, even though the return on investment was not zero (we always learn something), the time invested was far greater than the return, and it came at the expense of something that really needed to be worked on. Having the principle of prioritizing what’s important gives us the chance to ask ourselves whether we need to go down a rabbit hole, or even helps us stop when we are already in one. It also lets us decide which features are important to have in the game, and which ones are interesting but not relevant.
With that said, time must be available to investigate when something interesting becomes important. If you build a culture with a shared understanding of what’s important, then the odds of people spending time investigating things that are completely irrelevant are low. In that context, what usually happens is that someone has the intuition that there is something relevant to investigate, but it isn’t concrete enough to put on the schedule as something that must be done, or the person doesn’t have enough material to make the case. So it is really important that we schedule time for people to investigate something interesting. This is not only because it can benefit the game in the long run; people also need to feel that their voice is heard, and they usually want to be able to influence the game in a meaningful way. If we don’t leave any time for that, then odds are that really good people will leave for places where they are allowed to investigate, and we can end up with a team that only delivers marginal improvements.
Brian Karis presenting “The Journey to Nanite”. Good example of keeping focus on what’s important and getting the time to investigate.
Prioritize systemic improvements over localized ones.
When you work in the AAA game development space, or on big products (my experience at Autodesk comes to mind), one of the challenges is to think about the overall game or product. Their size makes it so that there is always something that needs to be fixed or improved that impacts just one discipline or feature. That’s made even more challenging if the way of working resembles a pipeline where certain disciplines are clearly prioritized over others. But we should try to prioritize systemic improvements that impact multiple disciplines and improve the game or product overall. Localized improvements are not necessarily bad, and that work also needs to be scheduled and done, but if we prioritize systemic improvements then we can do the most impactful and risky work earlier, and it increases the coherency of the game in terms of quality and features.
This is a lesson I learned in the early 2000s, when some studios spent engineering and art resources to create localized features such as water with bump mapping. None of the other surfaces looked anything like it, and for the most part it was only impressive the first time you saw it. In the end the water looked detached from the rest of the world, and it didn’t bring much immersion to the rest of the game. Time would have been better spent on systemic improvements than on that specific feature.
Screenshot from Expendable, a game released in 1999.
Prioritize work on low-confidence areas over showing progress on high-confidence areas.
It is always nice to show progress, and sometimes it is critical, as when showing progress to executives or a publisher. But one pitfall is that the areas where we can show the most progress the fastest are the ones we already know the most about. If you have experience as a studio making single-player first-person shooters and your next game is supposed to be an online role-playing first-person shooter, then it will be tempting to show good progress on the first-person shooter elements: you already know your team is good at that, your team can leverage the momentum of the previous game, and so on. But you should prioritize work on the online implementation because, unless you are extremely lucky and do everything exactly right on the first approach to the problem, odds are that you will need to reapproach the most difficult challenge multiple times. If we prioritize making progress on the low-confidence areas, then over time our confidence will grow, or we will realize that we need to bring in people with experience in the area, or that we need to pivot away from it.
Design for performance rather than optimizing the implementation.
One generalized issue is that many engineers have been overly worried about “premature optimization”, opting instead to depend on heroic efforts to optimize at the end of the development cycle. This has two big problems: one is that, on current hardware, big performance improvements come from design changes; the other is that we have plenty of time at the beginning of the implementation cycle and none at the end. Donald Knuth’s “premature optimization is the root of all evil” wasn’t advice to leave optimization to the end. The context of the quote was:
“There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”
With that said, this doesn’t mean we must constantly spend time focused on performance while building a feature. We can design for performance without implementing it right away. For example, if our design contemplates running some jobs in parallel across N different threads, we don’t have to do the work of scheduling and launching the jobs right away. Instead, we can just run on the same thread, so N equals 1, knowing that we’ll come back to the work of going wide later. The design already contemplates that, so it won’t be nearly as problematic as retrofitting multithreading onto a design made for single-threaded execution.
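The N-threads idea above can be sketched in a few lines (names are hypothetical, not from any particular engine): the work is expressed as a job over a range of items from day one, so switching from serial execution to going wide is a scheduling change, not a redesign.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// A "job" processes a contiguous range [begin, end) of items.
using Job = std::function<void(std::size_t begin, std::size_t end)>;

// Run the job over [0, count) split across numThreads workers.
// With numThreads == 1 this degrades to a plain serial call, which is how
// the feature can ship first; the design already contemplates going wide.
void runJobs(const Job& job, std::size_t count, std::size_t numThreads) {
    if (numThreads <= 1) {
        job(0, count); // N == 1: same code path, no scheduler yet
        return;
    }
    const std::size_t chunk = (count + numThreads - 1) / numThreads;
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < numThreads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(count, begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(job, begin, end); // disjoint ranges, no sharing
    }
    for (auto& w : workers) w.join();
}
```

Because callers only ever see `runJobs(job, count, N)`, flipping N from 1 to the core count later touches one call site rather than the design of every feature built on top of it.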
Make performance part of the definition of the feature rather than a constraint.
Thinking of performance as a constraint doesn’t seem too problematic; after all, the feature must fit within that performance constraint. The issue is that, in that framing, performance feels like a nuisance that gets in the way of the feature we have to implement, rather than a part of the feature itself. The difference might seem subtle, but it is usually quite substantial if the feature is big enough. When performance is thought of as a constraint, many discussions around the implementation of the feature hit roadblocks because “this just won’t fit in performance”, and people start cutting down the feature. When we think of performance as part of the feature, people focus on how to include performance in the feature’s definition: they look at how to make room in the frame to implement it, see performance as an inseparable part of its design, and in general there is less frustration with performance targets getting in the way.
Prioritize workflows over absolute performance.
For better or worse, it isn’t rare to have some contention between workflows and absolute performance. When absolute performance is prioritized and implementation time is not an issue, what can be achieved with a given piece of hardware can be amazing. The problem is that implementation time does matter, and so does the ability to iterate as many times as possible on a feature. Prioritizing workflows over absolute performance makes sense because otherwise we can waste a lot of time on iterations, which impacts the quality of the shipping game. Two concrete examples that come to mind are shader graphs and frame graphs. In both cases a hand-written, optimized implementation will have better performance, but it hampers prototyping and extensibility, and is far more expensive to implement and maintain. In the absence of shader graphs, artists are extremely limited to what their engineers offer in terms of materials. In the absence of frame graphs, engineers are limited in implementing new passes and rearranging the frame. Instead of focusing on constrained implementations that prioritize absolute performance, the focus should be on making performance part of the workflow: performance can be measured easily as shader graphs or frame graphs are created or modified, to ensure that the solution remains reasonable.
Guerrilla Games showing in-game CPU and GPU profiling tools making performance part of their workflow.
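A minimal sketch of the “measurement built into the workflow” idea might look like the following. This is a hypothetical toy, not any engine’s actual frame-graph API: passes are registered by name, and the graph times each one as it runs, so per-pass numbers exist the moment someone adds or rearranges a pass rather than only in a separate profiling effort.

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// A render pass: a name plus the work it does. (Hypothetical names.)
struct Pass {
    std::string name;
    std::function<void()> execute;
};

class FrameGraph {
public:
    void addPass(std::string name, std::function<void()> execute) {
        passes_.push_back({std::move(name), std::move(execute)});
    }

    // Executes every pass in registration order, timing each one on the CPU.
    // Returns (name, milliseconds) pairs that a HUD or log can show every
    // frame, so rearranging passes immediately shows its performance cost.
    std::vector<std::pair<std::string, double>> execute() {
        std::vector<std::pair<std::string, double>> timings;
        for (auto& p : passes_) {
            const auto t0 = std::chrono::steady_clock::now();
            p.execute();
            const auto t1 = std::chrono::steady_clock::now();
            timings.emplace_back(
                p.name,
                std::chrono::duration<double, std::milli>(t1 - t0).count());
        }
        return timings;
    }

private:
    std::vector<Pass> passes_;
};
```

A real frame graph would also track resource dependencies, cull unused passes, and time GPU work with queries, but even this toy shows the principle: the same structure that makes the frame easy to rearrange is the structure that makes it easy to measure.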
Conclusion
First of all, if you made it this far, thanks for taking the time! This was a pretty useful exercise for me, and hopefully it was helpful for you and made you think about your own development principles. And if you have some principles of your own, please share them in the comments section.