I reject the idea that A.I. can't play by the same rules as the player, but that comes with a huge "????" because it depends on how much time is spent on it and how much the particular developer actually knows about A.I.
In my college days we weren't even on to the PC yet (I'm 64) and we programmed in assembly. A.I. was not taught at all, even though computer chess was already a thing. And that was a game that played by the same rules as the human, and it was capable of regularly beating the player. Nowadays chess programs scale to player ability and teach. So it amazes me to read post after post filled with excuses for why all other games have poor A.I. that must cheat to win, and how impossible it would be to have A.I. that can play by the same rules and play well enough to win. It is possible, but it will likely never happen at a developer house churning titles out every couple of years. The best usually come from an independent not restricted by some time constraint allotted to him by the game's head producer, which in my experience has never exceeded 14 days. So no, you're not going to produce a decent A.I. in 14 days unless the game is pretty simple.
To have any type of discussion about A.I. in a particular game, one has to know the approach the programmer took and what type of A.I. is being used. Until that is known, anything written is at best guesswork and at worst meaningless fluff based on false assumptions. I can pretty much spot good A.I. after hours of play. It is extremely rare, but it does occur. Sadly, when it does, most players miss it. They are used to poor A.I. sending harassing units early (but never formulating a winning strategy), and if they don't see that early on they wrongly assume the A.I. is bad, when in reality the A.I. is good but is playing to win. It isn't attacking early because, first, it doesn't have to, and second, it hasn't built up enough to be ready for the conflict. The A.I. may appear to be doing nothing for 50 turns when in fact those 50 turns are being spent getting ready for war.
I would ask then for readers to take a few moments and read the rest of this post, which will explain the different types of A.I. Those with many hours under their belts playing Shadow Empire can then assess what's being used, and with that knowledge know what can be improved upon and what can't.
Today, in most cases, A.I. still isn't taught to any usable degree in the classroom. And no developer is going to tell one of its employees, "Gee, go take 3 years of paid time and learn A.I." If a developer actually understands and knows A.I., he taught himself on his own time, just like myself, and the truth is most dev houses put little to no resources toward it.
This is the reason A.I. sucks in most single-player games, bar none. It has nothing to do with what can or can't be done.
So what generally happens is the programmer uses a bunch of "IF/THEN" statements. This ad-hoc approach can work in simple cases, but it gets unwieldy past the most basic stuff. It can 'see and attack', for example...but anything more complex and it can get very hairy and even blow up at a certain point.
In 2005 things got shaken up. Monolith Productions released F.E.A.R. It blew people away. It wasn't the graphics or some new level design...it was the A.I. It could think. It took cover. It called for help. It worked with teammates. It flanked. Frankly, it did things no one had ever seen in a shooter. It was a PC game, and it used a full-blown language with libraries (they did later port it to consoles via a third-party dev, but it lacked the A.I. of the computer version and did poorly).
The guy who did that was Jeff Orkin. That's where I started in my own learning of A.I. The guy does a lot of teaching and writing about A.I., and if anyone is interested in learning A.I., it's a great starting point.
So, going back to the "IF/THEN" scene we all sorely experience in our games: adding a little bit of structure to a bunch of otherwise disjointed rules maps over somewhat to the most basic of A.I. architectures, the finite state machine (FSM). The most basic part of an FSM is a state. That is, an A.I. agent is doing or being something at a given point in time. It is said to be "in" a state. The reason this organizes the agent's behavior better is that everything the agent needs to know about what it is doing is contained in the code for the state that it is in. The animations it needs to play to act out a certain state, for example, are listed in the body of that state. The other part of the state machine is the logic for what to do next. This may involve switching to another state or simply continuing to stay in the current one.
Usually state machines employ elaborate trigger mechanisms that involve the game logic and situation. For instance, our "guard" state may have the logic, "if [the player enters the room] and [is holding a gun] and [I have the Sword of Smiting], then attack the player," at which point my state changes from "guard" to "attack". Note the three individual criteria in the statement. We could certainly have a different statement that says, "if [the player enters the room] and [is holding a gun] and [I DO NOT have the Sword of Smiting], then flee." Obviously, the result of this is that I would transition from "guard" to "flee" instead.
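To make the guard/attack/flee example concrete, here is a minimal FSM sketch. The class and method names are illustrative, not from any particular engine; the point is that each state carries its own transition logic in its own branch.

```python
# Minimal FSM sketch of the guard/attack/flee example above.
# Names (Sword of Smiting, etc.) come from the example; the rest is invented.

class Agent:
    def __init__(self, has_sword_of_smiting):
        self.state = "guard"
        self.has_sword_of_smiting = has_sword_of_smiting

    def update(self, player_in_room, player_has_gun):
        # Each state contains its OWN transition logic.
        if self.state == "guard":
            if player_in_room and player_has_gun:
                # Same trigger, different outcome depending on the agent.
                self.state = "attack" if self.has_sword_of_smiting else "flee"
        elif self.state == "attack":
            if not player_in_room:
                self.state = "guard"   # threat gone, resume guarding
        elif self.state == "flee":
            pass                       # a fleeing agent stays fled here
        return self.state

armed_guard = Agent(has_sword_of_smiting=True)
armed_guard.update(player_in_room=True, player_has_gun=True)   # -> "attack"
meek_guard = Agent(has_sword_of_smiting=False)
meek_guard.update(player_in_room=True, player_has_gun=True)    # -> "flee"
```

Notice how adding any new state would mean revisiting every branch that might need to transition to it, which is exactly the scaling problem described next.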
So each state has the code for what to do while in that state and, more notably, when, if, and what to do next. While some of the criteria can access some of the same external checks, in the end each state has its own set of transition logic that is used solely for that state. Unfortunately, this comes with some drawbacks.
First, as the number of states increases, the number of potential transitions increases as well, and at an alarming rate. If you assume for the moment that any given state could potentially transition to any of the other states, the number of transitions grows quickly: with 4 states, each of which can transition to the other 3, there are 12 transitions. Adding a 5th state increases that to 20 transitions, a 6th state to 30, and so on. When you consider that games could potentially have dozens of states transitioning back and forth, you begin to appreciate the complexity. What really drives the issue home, however, is the workload involved in adding a new state to the mix. In order to have that state accessible, you have to go and touch every single other state that could potentially transition to it.
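The growth pattern above is simply n * (n - 1): each of n states can transition to each of the other n - 1. A quick check:

```python
# Transition count for a fully connected FSM: n states, each able
# to transition to the other n - 1, giving n * (n - 1) transitions.
for n in (4, 5, 6, 20):
    print(n, "states ->", n * (n - 1), "transitions")
# 4 -> 12, 5 -> 20, 6 -> 30, and 20 states already means 380 transitions
```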
The second issue with FSMs is predictability. The player soon learns the behavior and begins exploiting it, because the same trigger always ends up with the same result. A Civ game comes to mind. Can anyone guess which ones, if not all of them, use FSMs? Civs sending a unit or two repeatedly? FSMs. Works great in shooters. Absolutely sucks in strategy games.
At this point, it is useful to point out the difference between an action and a decision. In the FSM above, our agents were in one state at a time—that is, they were “doing something” at any given moment (even if that something was “doing nothing”). Inside each state was decision logic that told them if they should change to something else and, in fact, what they should change to. That logic often has very little to do with the state that it is contained in and more to do with what is going on outside the state or even outside the agent itself.
For example, if I hear a gunshot, it really doesn’t matter what I’m doing at the time—I’m going to flinch, duck for cover, wet myself, or any number of other appropriate responses. Therefore, why would I need to have the decision logic for “React to Gunshot” in each and every other state I could have been in at the time? There is a better way.
The behavior tree.
It separates the states from the decision logic. Both still exist in the A.I. code, but the decision logic is no longer embedded in the actual state code. Instead, it is moved out into a stand-alone architecture called the behavior tree.
The main advantage to this is that all the decision logic is in a single place. We can make it as complicated as we need to without worrying about how to keep it all synchronized between different states. If we add a new behavior, we add the code to call it in one place rather than having to revisit all of the existing states. If we need to edit the transition logic for a particular behavior, we can edit it in one place rather than many.
Another advantage of behavior trees is that there is a far more formal method of building behaviors. Through a collection of tools, templates, and structures, very expressive behaviors can be written—even sequencing behaviors together that are meant to go together.
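A tiny sketch of that idea, assuming the usual selector ("try in priority order") and sequence ("do in order, all must succeed") node types. All names here are invented for illustration; real behavior-tree libraries add running states, decorators, and editors.

```python
# Behavior-tree sketch: all decision logic lives in one tree,
# not scattered across states. Node types are the classic
# selector (first child to succeed wins) and sequence (all must succeed).

def sequence(*children):
    def run(agent):
        return all(child(agent) for child in children)
    return run

def selector(*children):
    def run(agent):
        return any(child(agent) for child in children)
    return run

# Leaf conditions and actions read/write a simple agent dict.
def player_visible(agent): return agent.get("sees_player", False)
def has_weapon(agent):     return agent.get("has_weapon", False)
def attack(agent):         agent["action"] = "attack"; return True
def flee(agent):           agent["action"] = "flee";   return True
def patrol(agent):         agent["action"] = "patrol"; return True

# Adding a new behavior means adding one branch HERE, not editing every state.
root = selector(
    sequence(player_visible, has_weapon, attack),
    sequence(player_visible, flee),
    patrol,
)

agent = {"sees_player": True, "has_weapon": False}
root(agent)
# agent["action"] is now "flee": visible player, no weapon
```

The sequencing the text mentions falls out naturally: a sequence node strings together leaves that are meant to run as a unit.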
Now add a Planner. While the end result of a planner is a state (just like the FSM and behavior tree above), how it gets to that state is significantly different.
Like a behavior tree, the reasoning architecture behind a planner is separate from the code that "does stuff". A planner takes its situation, the state of the world at the moment, and compares it to a collection of individual atomic actions that it could do. It then assembles one or more of these tasks into a sequence (the "plan") so that its current goal is met. In SE that might be to build its economy or its defenses before finally attacking the nearest perceived threat.
Unlike other architectures, which start at the current state and look forward, a planner actually works backwards from its goal. For example, if the goal is "kill player", a planner might discover that one method of satisfying that goal is to "shoot player". Of course, this requires having a gun. If the agent doesn't have a gun, it would have to pick one up. If one is not nearby, it would have to move to one it knows exists. If it doesn't know where one is, it may have to search for one. The result of searching backwards is a plan that can be executed forwards.
The planner diverges from the FSM and behavior tree in that it isn't specifically hand-authored. Therein lies the difference in planners: they actually solve situations based on what is available to do and how those available actions can be chained together. One of the benefits of this sort of structure is that it can often come up with solutions to novel situations that the designer or programmer didn't necessarily account for and handle directly in code.
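The "kill player" chain above can be sketched as a bare-bones backward-chaining planner. The action names, preconditions, and effects are invented to mirror the example; a real planner adds action costs and proper search rather than this naive recursion.

```python
# Bare-bones backward-chaining planner, mirroring the "kill player"
# example. Each action maps to (preconditions, effects), both sets of facts.

actions = {
    "shoot_player":   ({"has_gun"},       {"player_dead"}),
    "pick_up_gun":    ({"at_gun"},        {"has_gun"}),
    "goto_gun":       ({"knows_gun_loc"}, {"at_gun"}),
    "search_for_gun": (set(),             {"knows_gun_loc"}),
}

def plan(goal, state):
    """Work backwards from the goal; return a forward-executable plan."""
    if goal <= state:
        return []                        # goal already satisfied
    for name, (pre, eff) in actions.items():
        if eff & goal:                   # this action contributes to the goal
            sub = plan(pre, state)       # recursively satisfy its preconditions
            if sub is not None:
                return sub + [name]
    return None                          # no plan found

print(plan({"player_dead"}, set()))
# -> ['search_for_gun', 'goto_gun', 'pick_up_gun', 'shoot_player']
```

Searching backwards from "player_dead" discovers the whole chain; executing the returned list forwards is the plan. If the agent already has a gun, the planner simply returns the shorter plan `['shoot_player']`.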
As I mentioned, Jeff Orkin used a planner in Monolith's shooter, F.E.A.R. His variant was referred to as Goal-Oriented Action Planning, or GOAP.
To sum it up, there's a lot to A.I., and I haven't even touched utility-based systems or NNs (neural networks). I'll note my own A.I. also includes 20 personalities that give weight to what the A.I. might do (a janitor will run and hide from a monster, but a soldier will attack). There's a small chance the janitor is Bruce Willis, though, so I'll give it a 90-10 weight. This way Patton can play like Patton and Rommel will behave like Rommel, and everything is much less predictable.
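A sketch of what that personality weighting might look like, assuming the janitor's 90-10 hide/attack split from the text; the table values and function names are invented for illustration, not the author's actual implementation.

```python
import random

# Personality-weighted reactions: same stimulus, different odds per
# personality. The janitor's 0.10 attack chance reflects the 90-10
# "might be Bruce Willis" weighting described above.

PERSONALITIES = {
    "janitor": 0.10,   # usually hides, occasionally turns out to be Bruce Willis
    "soldier": 0.95,   # almost always attacks
}

def react_to_monster(personality, rng=random.random):
    attack_chance = PERSONALITIES[personality]
    return "attack" if rng() < attack_chance else "hide"

# Forcing the roll makes the weighting visible:
react_to_monster("janitor", rng=lambda: 0.50)   # -> "hide"
react_to_monster("janitor", rng=lambda: 0.05)   # -> "attack" (Bruce Willis)
react_to_monster("soldier", rng=lambda: 0.50)   # -> "attack"
```

Swap the probability table per faction leader and the same decision code makes Patton play like Patton and Rommel like Rommel.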
So What does SE use?
I know it's not just "IF/THEN" statements. I know it's not just FSMs. How in-depth does it go? Once we know that, we know what is possible.
What's not there?
In my A.I. you have a blackboard (where all game data is stored), then a Needs section, and then the decision section. But in between the blackboard and the Needs block is an A.I. blackboard section. This section is there to keep the A.I. honest. It filters out information the A.I. should not know: if the guard didn't hear or see you, he won't know your location. If such a filter were in SE, the same would be true, but there isn't one. Delete your defending troops in your cities and see how fast ALL the A.I. factions declare war on you. It knows your strengths, position, etc. at all times.
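A minimal sketch of that filtering layer, with invented field names: the A.I. reasons only from the filtered view, so an unseen player simply isn't on its blackboard.

```python
# Sketch of an "A.I. blackboard" filter that keeps the A.I. honest:
# it only passes along facts the A.I.'s units have legitimately discovered.
# All field names here are invented for illustration.

game_blackboard = {
    "player_position": (12, 7),
    "player_army_strength": 430,
    "player_spotted": False,   # no guard has seen or heard the player
}

def filtered_view(bb):
    """Copy only the facts the A.I. is allowed to know right now."""
    view = {}
    if bb["player_spotted"]:
        view["player_position"] = bb["player_position"]
        view["player_army_strength"] = bb["player_army_strength"]
    return view

# With no sighting, the honest A.I. gets an empty view and must scout;
# an A.I. reading game_blackboard directly "knows" everything at all times.
filtered_view(game_blackboard)   # -> {}
```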
But keep in mind everything is checks and balances: turn times vs. what the engine can handle, development time, and so on. Often these things would be great in an unlimited engine with unlimited funds and unlimited time. You have to decide what you want in and what you can't do.
Hopefully what's here is very good, and what's missing isn't having a huge impact on gameplay. I don't yet have the play hours to determine that, but I am working on it.