So I’ve been mulling this over lately.
Part of the fun in multiplayer games comes from the fact that you get to play against other people. They’re unpredictable, challenging, interesting. Much different from AIs you eventually figure out and repeat the same old patterns against over and over. Real players engage you, make you doubt their moves. A randomly acting AI would shoot in that general direction, but miss an important mark: purpose.
Good players calculate every move they make. Since game characters run on computers, wouldn’t they be able to do the same? More or less, yeah, but it’s a lot of work. Additionally, players will figure each other out, read each other’s play style, and adapt their own to counter it. Can we make AIs perform in a similar way, without delving into the kind of machine learning you need dedicated hardware for?
A simple step would be to record the sequences in which players activate their abilities. Does ability B frequently follow A? Then we'd better prepare for B whenever we see A get used. Similarly, what does the player do when we throw something at them? Do they move away from us, or towards us more often? Adjust our aim accordingly. In short: keep track of actions and their follow-ups, and predict based on that.
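That kind of bookkeeping is basically a frequency table. Here's a minimal sketch of the idea in Python — all the names (`PlayerModel`, the ability labels, the dodge directions) are hypothetical, just to show the shape of it:

```python
from collections import defaultdict

class PlayerModel:
    """Counts which ability tends to follow which (a simple bigram
    table), plus how often the player dodges toward vs. away from us.
    A sketch of the idea, not engine code."""

    def __init__(self):
        # follow_ups[previous][next] = times `next` came right after `previous`
        self.follow_ups = defaultdict(lambda: defaultdict(int))
        self.last_ability = None
        self.dodge_counts = {"toward": 0, "away": 0}

    def observe_ability(self, ability):
        """Record one ability use, noting what it followed."""
        if self.last_ability is not None:
            self.follow_ups[self.last_ability][ability] += 1
        self.last_ability = ability

    def predict_next(self, ability):
        """Most frequent follow-up to `ability`, or None if unseen."""
        counts = self.follow_ups.get(ability)
        if not counts:
            return None
        return max(counts, key=counts.get)

    def observe_dodge(self, direction):
        """Record whether the player dodged 'toward' or 'away'."""
        self.dodge_counts[direction] += 1

    def likely_dodge(self):
        """The direction the player has favored so far -- aim for it."""
        return max(self.dodge_counts, key=self.dodge_counts.get)
```

So if the player has cast A → B twice and A → C once, `predict_next("A")` returns `"B"`, and the AI can pre-position its counter the moment it sees A.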
That knowledge could then be used as a basis for playing more calculated offense. Does the player skirt around us, keeping us at the edge of their attack range? We probably want to rush them down if we can. If the player does that to us, however, do we want to kite back or just take the brawl? How quickly can we kill the player? How quickly can the player kill us?
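Those last two questions boil down to a race: compare our time-to-kill against theirs. A rough sketch of that decision, with made-up function and parameter names:

```python
def choose_engagement(our_dps, our_hp, their_dps, their_hp,
                      player_kiting=False):
    """Rush-or-kite heuristic (hypothetical numbers and names):
    whoever needs less time to kill the other wins the race.
    If we'd win it but the player is kiting at our range edge,
    close the gap; if we'd lose it, back off instead."""
    time_to_kill_them = their_hp / our_dps
    time_to_kill_us = our_hp / their_dps
    if time_to_kill_them < time_to_kill_us:
        # We win the damage race: force the fight.
        return "rush" if player_kiting else "brawl"
    # We lose the race: keep our distance.
    return "kite"
```

In a real game the inputs would come from the same tracked observations — estimated player DPS, cooldowns in flight, and so on — rather than fixed numbers.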
This can get very complicated very quickly, and at some point you want to move from “dumb learning” to “smart learning”, and then you need to wait a couple of years for consumer hardware to catch up and do that in real-time.