I’ve always believed it’s hard to judge code, because the criteria change with context. Is comprehension or speed more important? Well, is it high or low level? How about debugging, memory, time-to-first-completion, iteration time, modularity, etc.? It all depends on where, why, and by whom the code is being written. These tradeoffs form the basis of software architecture and software system design.
Fair enough. Today I think I found a new measurement. My new metric is “organization” – how spread out the relevant code is across related systems, and how often you have to go modify and tweak the original system to get what you want. Renderers tend to be very “organized”. Component architectures, not so much. It’s similar to modularity and extensibility metrics, but on the receiving rather than the giving end.
Why is that important to me? Because I typically work at a very high level, where I’m adding small pieces to 5 or 6 different things and then calling them from somewhere else. And the closer together those things are in physical space, the easier it is to understand what’s going on. When you’re extending or iterating on lots of systems that are spread out, it’s significantly harder than if all additions are made in one place, organized under a common, well-used API.
I think it’s one of the reasons scripting is so effective. Note that organization is not unique to scripting – you could certainly organize a C++ gameplay interface this way. But we usually don’t make the time. Scripting constrains you, forces you to devote the time to make an organized way of writing very high level code, and that organization makes understanding and adding/modifying code much easier. If you want to avoid scripting, it might be a good idea to provide a similar benefit in your very high level gameplay architecture.
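To make the idea concrete, here’s a minimal C++ sketch of what such an “organized” gameplay layer could look like. Every name here (the systems, the `Gameplay` class, the `spawn` method) is a hypothetical illustration, not any particular engine’s API – the point is just that high-level additions land in one place, with the fan-out into scattered systems hidden behind a single well-used interface:

```cpp
#include <string>
#include <vector>

// Stand-ins for low-level systems that high-level code would
// otherwise have to reach into individually.
struct Renderer { std::vector<std::string> sprites; };
struct Audio    { std::vector<std::string> sounds;  };
struct Physics  { std::vector<std::string> bodies;  };

// The "organized" layer: one API under which all very high level
// additions live, close together in physical space.
class Gameplay {
public:
    Gameplay(Renderer& r, Audio& a, Physics& p) : r_(r), a_(a), p_(p) {}

    // Adding a new entity touches exactly one place; the routing
    // into the spread-out systems is this method's job, not the
    // caller's.
    void spawn(const std::string& name) {
        r_.sprites.push_back(name + ".png");
        a_.sounds.push_back(name + "_spawn.wav");
        p_.bodies.push_back(name);
    }

private:
    Renderer& r_;
    Audio&    a_;
    Physics&  p_;
};
```

A script-facing binding layer would sit naturally on top of a class like this – which is part of why scripting forces the organization into existence, while a C++ codebase has to choose it deliberately.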
That and hot reloading. Man, I love fast iteration times.