Re the so-called bypass issue. First off, this is not a bypass issue. Rather, it is a case of the force replanning its route. This can happen for a number of reasons, but often happens after a reassessment of routing. The engine doesn't allow you to set a route for the force to follow. Rather, it allows you to set a series of waypoints. We kill off waypoints as they are passed. If we didn't do this, then when it came time to replan you would all complain when the force went back ten km to the first waypoint even though it was now on the eighth waypoint. We had all those complaints when BFTB was first released... remember? So, when the AI replans it uses whatever waypoints remain. If you are on the eighth of ten, it will replan with just the remaining three waypoints, and this may well mean that it uses an avoidance route to get to the first remaining waypoint, and that this route differs from what you had originally marked out. That is a feature of the engine, and I don't envisage changing this for the foreseeable future, as there are far more significant issues to attend to.
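To make the mechanic concrete, here is a minimal Python sketch of the behaviour described above. All names (`replan`, `find_path`) are hypothetical illustrations for this post, not the engine's actual code:

```python
def replan(position, waypoints, current_index, find_path):
    # Waypoints already passed are killed off, so replanning never
    # sends the force back ten km to waypoint one.
    remaining = waypoints[current_index:]
    route = []
    start = position
    for wp in remaining:
        # find_path may pick an avoidance route that differs from the
        # line you originally drew between these two points.
        route.extend(find_path(start, wp))
        start = wp
    return route
```

With ten waypoints and the force on the eighth, only the last three survive, which is exactly why the replanned leg to the first survivor can look nothing like the route you plotted.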
You think it's not significant, Dave? Why? The force meets an obstacle and, if the lead element has already eaten up all the waypoints, it replans around the obstacle.
If the lead element has eaten all the waypoints - or, more precisely, has completed the execution of the plan you assigned to the force in question - then it will adopt a "Defense" stance. That is, it will be waiting for further orders. What Dave is telling you is that the AI will try to comply with your orders in a way which is consistent with (1) the preservation/aggression/ammo conservation policies you set for it, (2) the abilities of the commanding officers of the force - they're not all geniuses, (3) the situation of the enemy forces and how much of a threat they pose - a threat assessment which is also made by those commanding officers - and (4) an assessment of feasibility - you won't get a force in this game to engage the enemy at crazy odds.
Are you saying that if we want it to do otherwise (for instance, attack the obstacle) we should tick 'attack' (and hence this is not a very significant issue).
You can't 'edit' orders like that at the moment. You need to replot your orders. Alternatively, you should (1) plot a move waypoint into an area where the force can't be engaged by the enemy (which may mean falling back), then (2) plot an attack order on the obstacle, using the previous waypoint as the FUP for the attack.
And you say it's not a 'bypass', but what's the difference? (I assume there is one, from what you say.) And why bother with a box titled 'avoidance' or 'quickest' if it will just go around anyway? But what if I just want it to go past on the very route I have planned? Are you saying I should place lots more waypoints to ensure this?
On the contrary, you should be placing fewer waypoints, or revising the existing set of waypoints and considering - honestly - whether they make sense given the new situation.
The AI doesn't have human powers of reasoning; it's circumscribed by its knowledge and your instructions (although its behaviour can indeed look very human). To draw an analogy, imagine you have one of those nifty Roomba robots, with the added feature that you can tell it where to clean and also what places it should avoid (say you want it to avoid one specific room for some reason). It also has an accurate map of your house, so it knows how to find a path between any two places in your house. The robot can't open closed doors, but can push doors which are left ajar.
Now, you tell the robot to clean room X, avoiding room Z. Let's say that your house layout is such that there are two possible paths between the robot's current location, Y, and X. One of the paths goes through Z, and the other is a crazy detour which involves the robot getting out through the dog hole in the front door, across the front yard - possibly over the flower beds there - into the backyard, then through the kitchen until it gets to X. It will probably get muddy wheels in the backyard and make a mess of the kitchen.
But it will eventually get into X and clean it. You'll probably be quite mad at it nonetheless.
Now imagine that the kitchen door is closed. You come back home in the evening and you see that X is dirty, and the robot is nowhere in the house. You then check the backyard, and you see that the poor thing has run out of power trying to push the kitchen door open (and has probably scratched the paint). Again, you'll probably be quite mad at it.
The robot knows how to move around and knows how to clean a room, but it can't reason about the meaningfulness of what it's doing. A more human-like robot would have been programmed with the knowledge that if it goes through the front yard the flowers might get trampled, that if it goes through the backyard it will get muddy wheels and make a mess of any place it goes through afterwards, and with the ability to recognize when a door is open. It would then use this information to consider the possible scenarios and tell you that the task you set may well have undesirable side effects (or may not be feasible at all).
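The analogy can be sketched as a simple graph search. The room names follow the example above, but the house layout and the code are my own invention - the point is that the robot (like the game's pathfinder) just finds *a* route honouring the avoidance constraint, with no notion of muddy wheels or trampled flowers:

```python
from collections import deque

# Toy map of the house from the analogy. The layout is invented:
# Y connects to X both directly through Z and via the crazy detour.
HOUSE = {
    "Y": ["Z", "dog_hole"],
    "Z": ["Y", "X"],
    "dog_hole": ["Y", "front_yard"],
    "front_yard": ["dog_hole", "backyard"],
    "backyard": ["front_yard", "kitchen"],
    "kitchen": ["backyard", "X"],
    "X": ["Z", "kitchen"],
}

def find_path(start, goal, avoid=()):
    """Breadth-first search that simply refuses to enter avoided rooms.
    It finds *a* path or reports failure - nothing more."""
    queue = deque([[start]])
    seen = {start} | set(avoid)
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in HOUSE[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route at all (e.g. the kitchen door is closed)
```

Without constraints the robot goes straight through Z; told to avoid Z, it happily takes the detour through the dog hole, yard, and kitchen; and if the kitchen is also blocked, it can only report that no route exists - it can't weigh whether the detour was worth it.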
The algorithms to do this do indeed exist, for any house you can imagine and for a substantial number of conditions. But the robot would need quite a good CPU, a few gigabytes of memory, and a few minutes with that CPU engaged at 100% capacity.
Extrapolate the house to a really huge building with several hundred rooms, and be ready for your robot to "think" for several hours before it's even able to tell you whether the task is sound.
Let's make the problem more complex: doors can open and close themselves randomly, and there's a dog roaming the house - so its location is not known - that might get startled if it sees the robot and topple it. This - telling you beforehand whether your orders make sense in any conceivable situation - is not computationally tractable (and even where it is, there's probably no definite answer: the task might or might not turn out to be possible). You'll be stuck with a robot which can't reach that conclusion and will have to try its luck (and possibly fail), very much as we humans do. This is actually a famous open problem in AI, taken from this game: http://en.wikipedia.org/wiki/Hunt_the_Wumpus
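A rough back-of-the-envelope count shows why this blows up. The numbers below are invented for illustration - several hundred rooms, each door independently open or closed, a dog in some unknown room - but the conclusion doesn't depend on them much:

```python
# Each distinct "situation" the robot would need to reason about is a
# combination of: where the robot is, which doors are open, and where
# the dog is. These counts are illustrative, not from the game.
rooms = 300
doors = 400            # each independently open or closed
dog_positions = rooms  # dog could be in any room

states = rooms * (2 ** doors) * dog_positions
# Far more states than could ever be enumerated, even in principle.
```

Checking your orders against every conceivable situation means, in the worst case, reasoning over a space like this - which is why the robot (and the game AI) has to try its luck instead.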
Wrapping up: in Command Ops you have an extremely flexible and powerful AI assistant that helps you keep the show moving, but it needs regular supervision.
It's not a killer issue in one sense - I can watch out for it, incur an orders delay, and issue new orders (though that's far from ideal). But in the examples I sent you, the AI planned a route right through what I KNEW was the heart of enemy-occupied territory. That's as irritating as any of the other little issues that hurt the gameplay (or the realism). No? It can certainly make or break a tight, time-constrained scenario if you have to plan around this, with all the consequent delays.
Clausewitz equated war to a game of cards. Sometimes you get good cards dealt, sometimes you don't.
< Message edited by Bletchley_Geek -- 2/7/2013 3:46:12 AM >