Recent technological advances have increased the demand for high-level artificial intelligence, and the growing processing power of modern computers makes realistic AI increasingly feasible. Games play an important role in AI research, providing a flexible environment in which to perform testing.

The holy grail of academic AI is to create an artificial man, one capable of fooling a human into believing it is real. First Person Shooters, such as Close Quarters Conflict, offer the opportunity to create an artificial player based on human behaviours.

This paper puts forward Neural Networks as an alternative to other methodologies by designing a Bot capable of navigation, displaying rudimentary emotions, communicating with other NPCs, attacking, aiming, firing and making tactical decisions. The aim is to make this Bot fun and capable of challenging human opponents. The machine learning capabilities are based on several Feed Forward Multilayer Neural Networks which are trained by Genetic Algorithms.

This is just one approach to Bot development in First Person Shooters.

Chapter 1 – Introduction

As Senior System Analyst of xxxxxxxxxxx.com, I have been approached to improve the AI of the Bots in the First Person Shooter Tournament and Team Play modes. This is a traditional FPS; however, the Bots are not challenging.

In Tournament mode, users play against the Bots and record a score; the highest score wins. In Team Play, a group of up to 4 users plays against the CPU. The Bots do not work together and overcoming them is easy. Again, scores are recorded and the highest score wins.

Imitative learning techniques have been adopted by many in the field of artificial intelligence, as they are seen as a stepping stone towards creating a truly intelligent artificial man. The environment of a computer game is an abstraction of the real world, governed by the same laws of physics, so any technique that succeeds in this environment should be adaptable for use in real life.

1.1    Project Brief

The project brief as provided at the beginning of this project, October 2007:

In your job as senior system analyst, solution provider or technology executive in a game company, you have been asked to identify an AI technology tool or technique to be used for an application or solving a problem – it is up to you – the student – to choose the problem domain or application – an example would be an intelligent NPC in a FPS game.

Your brief requires you to identify the maturity and the applicability of the commercially available or feasible AI technologies, tools or techniques. Your brief is to prepare, a comprehensive but concise, report to your management critically evaluating the use of the AI technologies and techniques in your chosen application or problem, explaining the rationale for your choice and to develop and present a case study with a high level design specification for a proof of concept demonstrator.  You are also asked to briefly evaluate the AI technology in general terms of other potential applications in the business domain of the company.

1.2    Scope of document and Problem

The purpose of this document is to specify exactly what can be achieved by completing this project and the processes required to achieve it. It details all ideas and the research performed in order to implement those ideas successfully, and provides a comprehensive description of the intentions of the system and its perceived capabilities.

It is envisaged that this document could be used by a developer to build an example NPC using this as a template.

As there are many techniques available, all must be researched thoroughly.

Chapter 2 – Background

Detailed in this section are available AI techniques which may be incorporated into the application.

2.1    Finite State Machines

Finite State Machines (FSM) are also known as Finite State Automata (FSA). An FSM models the behaviour of a system as a set of predefined states; depending on the current conditions, the system determines which state should be used. The states are connected by transitions that are triggered by changes in the world. Figure 2.1 shows a general FSM implementation. [ 1 ]

General FSM Implementation


There are several different types of FSM. These include Markov, Fuzzy, Multiple, Polymorphic and Stack-based FSMs. FSMs can also be combined with other techniques.

For an FPS system, Figure 2.2 shows a suitable FSM. [ B ] To make this more manageable, a stack should be incorporated. This will allow the system to create more states whilst not increasing the complexity of the FSM. It would also make Team CQC bots easier to design as more states are required. This example has the following states:

  • Attack – Engage in combat with the player
  • Evade – Health is low, run in the opposite direction of the player
  • Chase – If the player runs away, chase him
  • Wander – Explore area, Search for ammo / health if required
  • Spawn – Each time NPC dies, it returns to this state.

Key to Figure 2.2:

D – Dead

E – Enemy in Sight

S – Hear a Sound

L – Remember



Whilst this system will satisfy the requirements, it will not render the Bot unpredictable unless incorporated with other techniques.
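A stack-based FSM of the kind described above can be sketched as follows. The state names follow Figure 2.2, but the transition conditions and the health threshold are illustrative assumptions, not part of the original design:

```python
# Minimal sketch of a stack-based FSM for the states listed above.
# Pushing a state interrupts the current behaviour; popping resumes it.
class StackFSM:
    def __init__(self):
        self.stack = ["Spawn"]          # each NPC starts in the Spawn state

    @property
    def current(self):
        return self.stack[-1]

    def push(self, state):
        self.stack.append(state)

    def pop(self):
        if len(self.stack) > 1:
            self.stack.pop()

    def update(self, enemy_in_sight, health):
        # illustrative transitions: E triggers Attack, low health triggers Evade
        if self.current == "Spawn":
            self.push("Wander")
        elif enemy_in_sight and self.current == "Wander":
            self.push("Attack")
        elif health < 20 and self.current == "Attack":
            self.push("Evade")
        elif not enemy_in_sight and self.current in ("Attack", "Evade"):
            self.pop()                  # resume the interrupted behaviour

fsm = StackFSM()
fsm.update(enemy_in_sight=False, health=100)   # Spawn -> Wander
fsm.update(enemy_in_sight=True, health=100)    # Wander -> Attack
```

Because interrupted states remain on the stack, new states can be added without increasing the complexity of the transition logic, which is the manageability benefit noted above.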

2.2              Action Oriented AI – Pattern Movement, Chasing and Evading

Action-Oriented AI is a broad area, covering Pattern Movement as well as Chasing and Evading.

Pattern Movement deals with creating a predefined pattern for the Bot to follow, allowing it to patrol all game areas. It is restricted in that every level would require a pre-built map, so the Bot would not be robust to new environments.

As the name suggests, Chasing and Evading comprises two parts: deciding whether to chase or to evade, and then carrying out the chase or evasion. On closer examination the problem contains a third element, obstacle avoidance, which is relevant to our system. The added complexity shows in Line-of-Sight Chasing: it is the most effective chasing algorithm, but it is not optimal when obstacles are present. It can be employed effectively alongside other techniques such as FSMs and Neural Networks.

2.3    Rule Based Systems

Rule-based AI systems are among the most commonly used systems worldwide. All possible outcomes are predefined, and the Bots appear realistic as long as the user acts within those parameters. This is the RBS's main weakness: it is not flexible in uncontrolled environments. The structure is generally made up of IF THEN statements, which are easy to define and can be extended with AND or OR. With a large number of rules, the system can become computationally and memory intensive. Conflicts between rules can also occur; these can be resolved by assigning weights to each choice, though the weighting can be incorrect, especially when dealing with similar cases.
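The IF THEN structure with weight-based conflict resolution can be sketched as below. The rules, thresholds and weights are illustrative, not taken from any specific game:

```python
# Sketch of a weighted rule-based system. Each rule is a (condition, action,
# weight) triple; on conflict, the highest-weighted fired rule wins.
rules = [
    (lambda s: s["health"] < 25, "retreat", 0.9),
    (lambda s: s["enemy_visible"] and s["ammo"] > 0, "attack", 0.7),
    (lambda s: s["ammo"] == 0, "find_ammo", 0.8),
]

def choose_action(state):
    fired = [(weight, action) for cond, action, weight in rules if cond(state)]
    if not fired:
        return "wander"                 # default when no rule fires
    return max(fired)[1]                # conflict resolution by weight

action = choose_action({"health": 20, "enemy_visible": True, "ammo": 5})
```

Here both the "retreat" and "attack" rules fire, and the weight breaks the tie in favour of retreating, which is exactly where the weakness noted above appears: for similar cases the fixed weights may pick the wrong rule.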

2.4    Scripting

Scripting is any basic programming language customised for a specific game; it can also be tailored to allow players to edit scripts, and may be seen as a combination of RBS and FSM systems. The easiest implementation would be to have the game parse text files containing commands, which would also allow scripts to be loaded only when required. It is a plausible technique as it can cover the Bot's behaviour and interaction with other Bots, and both verbal interaction and level triggers can be used.

2.5    Path Finding

With the inclusion of obstacles, a complex navigation system is required. There are many types of path-finding algorithm; the Area Awareness System (AAS) and A* are two commonly used techniques.

The AAS utilises 3D bounded hulls, known as areas, which are selected on a map like nodes. These areas can be traversed easily; however, movement between them is straight-line, so the system is only sufficient in areas without obstacles.

The A* algorithm is guaranteed to find the best possible path between two points, provided its heuristic never overestimates the remaining cost. Nodes have to be placed, and the algorithm then determines the path between them; to keep CPU cycles low, the minimum possible number of nodes is used. The search expands outwards from the start, computing a cost for each node reached; scores are derived from these costs and the best-scoring path is chosen. A* can be adapted to different types of terrain and to situations where the shortest path is not the best path.

Both of these algorithms are suitable, but both require nodes to be predetermined. An algorithm that could navigate in any environment would be more suitable.
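The node expansion and scoring described above can be sketched on a uniform-cost grid, which stands in for a simplified node map. The grid layout and Manhattan-distance heuristic are illustrative assumptions:

```python
import heapq

# Grid-based A* sketch: 0 = free cell, 1 = obstacle.
# f = g (cost so far) + h (Manhattan-distance heuristic to the goal).
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]     # (f, g, node, path)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path                            # best-scoring path found
        if node in best_g and best_g[node] <= g:
            continue                               # already reached more cheaply
        best_g[node] = g
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                ng = g + 1                         # uniform terrain cost
                heapq.heappush(open_set,
                               (ng + h((nx, ny)), ng, (nx, ny), path + [(nx, ny)]))
    return None                                    # no path exists

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (0, 2))
```

Varying terrain could be handled by replacing the uniform step cost with a per-cell cost, which is how the "shortest path is not the best path" case is covered.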

2.6    Genetic Algorithms

This area is based on biological genetics and mimics evolution. Darwin proposed that only the fittest of a species survive. With this in mind, a set of traits is recorded and a population is created. The individuals are placed under the environmental constraints of the gameplay and only some survive. Their traits are encoded into chromosomes and combined into the next generation, and this process is repeated until the best solution is attained.

This algorithm can be used to evolve and train neural networks. This shall be covered in the next section.

2.7    Artificial Neural Networks

Artificial Neural Networks attempt to replicate the human brain's functionality on a much smaller scale. The structure is similar: it is composed of neurons connected via dendrites. The human brain contains approximately 10^11 neurons, each with around 10^4 inputs. For this system, a range of inputs would be determined and a value supplied for each; the network can be configured to have many outputs. It can consist of many layers, each composed of a number of neurons. For example, a simple network would consist of an input layer, a hidden layer and an output layer, with every neuron in the input layer connected to the hidden layer and every hidden neuron connected to the output layer. This is known as a Feed Forward Multilayer NN or Multilayer Perceptron; Figure 2.3 shows this implementation. [ A ] Before the inputs pass to the hidden layer, they are weighted. The weights are the most important aspect, as they define the behaviour of the ANN; their values can be determined by training, or by evolving the ANN with genetic algorithms.
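A forward pass through such a network can be sketched as follows. The layer sizes are illustrative; in practice the weights would come from the GA training described in section 2.6 rather than random initialisation:

```python
import math
import random

# Sketch of a Feed Forward Multilayer NN (multilayer perceptron) forward pass.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_inputs, n_neurons, rng):
    # each neuron holds n_inputs weights plus one bias term
    return [[rng.uniform(-1, 1) for _ in range(n_inputs + 1)]
            for _ in range(n_neurons)]

def feed_forward(layers, inputs):
    for layer in layers:
        # weighted sum of the previous layer's outputs, plus bias, squashed
        inputs = [sigmoid(sum(w * x for w, x in zip(neuron[:-1], inputs))
                          + neuron[-1])
                  for neuron in layer]
    return inputs

rng = random.Random(0)
net = [make_layer(3, 4, rng),    # 3 inputs -> hidden layer of 4 neurons
       make_layer(4, 2, rng)]    # hidden layer -> 2 outputs
out = feed_forward(net, [0.5, -0.2, 0.8])
```

The flattened weight and bias lists in `net` are exactly what a GA chromosome would encode, one gene per value.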



This technology shall form the basis of the Bot as it is extremely adaptable and is capable of learning.

2.8    Emergent Behaviour – Flocking

This area concentrates on mimicking animals or insects: for example, it replicates the way sheep graze together or birds flock together. It comprises three basic rules:

1.         Cohesion: Each Bot should steer towards the average position of its peers.

2.         Alignment:  Each Bot should align itself towards the average heading of its peers.

3.         Separation: Each unit should avoid collisions with neighbours.

For these rules to be implemented successfully, a steering model has to be created. It must ensure that the Bots do not collide whilst also not straying too far apart. The alignment of the group depends on the angle of the current leading Bot relative to the following Bots. By giving each Bot a view radius, the system can be made aware of its neighbours and thus avoid collisions. Obstacle avoidance can also be implemented.
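The three rules combine into a single steering vector per Bot, as sketched below in 2D. The rule weights and separation distance are illustrative tuning values:

```python
# Sketch of the three flocking rules: cohesion, alignment, separation.
# Each Bot is a dict with "pos" and "vel" as 2D tuples.
def flock_steer(bot, neighbours,
                cohesion_w=0.01, align_w=0.1, sep_w=0.5, sep_dist=2.0):
    if not neighbours:
        return (0.0, 0.0)
    px, py = bot["pos"]
    steer_x = steer_y = 0.0
    n = len(neighbours)
    # 1. Cohesion: steer towards the average position of peers
    cx = sum(b["pos"][0] for b in neighbours) / n
    cy = sum(b["pos"][1] for b in neighbours) / n
    steer_x += (cx - px) * cohesion_w
    steer_y += (cy - py) * cohesion_w
    # 2. Alignment: match the average heading of peers
    steer_x += (sum(b["vel"][0] for b in neighbours) / n) * align_w
    steer_y += (sum(b["vel"][1] for b in neighbours) / n) * align_w
    # 3. Separation: push away from peers that are too close
    for b in neighbours:
        dx, dy = px - b["pos"][0], py - b["pos"][1]
        if (dx * dx + dy * dy) ** 0.5 < sep_dist:
            steer_x += dx * sep_w
            steer_y += dy * sep_w
    return (steer_x, steer_y)

bot = {"pos": (0.0, 0.0), "vel": (0.0, 0.0)}
peers = [{"pos": (10.0, 0.0), "vel": (1.0, 0.0)}]
steer = flock_steer(bot, peers)   # pulled towards the distant peer
```

In the game, `neighbours` would be only the Bots inside the view radius, which is how the neighbour-awareness described above enters the calculation.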

This behaviour will be implemented in Team CQC.

2.9    Fuzzy Logic

Fuzzy Logic represents another suitable technology. It is ideal for incorporating simple emotions and would help create a challenging Bot. The theory underlying Fuzzy Logic states that problems may be solved in an imprecise manner. The father of Fuzzy Logic, Lotfi Zadeh, described “fuzzy logic as a means of presenting problems to computers in a way akin to the way humans solve them.” He also stated that “the essence of fuzzy logic is that everything is a matter of degree.”

An excellent implementation of a Bot using fuzzy logic has been created for the Quake II environment. [ 2 ] A basic overview is shown in Figure 2.4.



Only two emotional states are created, mapping a fight-or-flight mechanism: Aggression and Fear respectively. The EmoBot uses six environmental perceptions; our system could incorporate many more. Each perception has three states:

1.         NPC Health: { Bad, Fair, Good }

2.         NPC Damage: { Low, Medium, High }

3.         Player Health: { Bad, Fair, Good }

4.         Player Damage: { Low, Medium, High }

5.         Distance: { Near, Medium, Far }

6.         Angle: { Small, Medium, Large }

Additional States

7.         Ambush: { Player and NPC: Health, Position }

8.         Searching: { Player and NPC: Health, Position }

An example of a fuzzy set is shown in Figure 2.5.

Fuzzy Partition


For example:

IF Distance Is Far AND Bot Health Is Good THEN Fear Is Low

IF Distance Is Near AND Bot Health Is Bad THEN Fear Is High

Based on this Fear and Aggression spectrum, a Fear/Aggression Space can be derived.



Each of these states could be altered for each specific Bot. Each emotion would have a corresponding action. This type of Bot would be extremely suitable for our system.
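The two example rules above can be evaluated as sketched below. The ramp-shaped membership functions and their breakpoints are illustrative assumptions (the actual partition is shown in Figure 2.5); AND is taken as the minimum of the memberships, a common fuzzy convention:

```python
# Sketch of the two fuzzy rules above with illustrative ramp memberships.
def ramp_down(x, lo, hi):      # membership 1 below lo, falling to 0 at hi
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def ramp_up(x, lo, hi):        # membership 0 below lo, rising to 1 at hi
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def fear_level(distance, bot_health):
    near = ramp_down(distance, 5.0, 20.0)    # Distance Is Near
    far = ramp_up(distance, 5.0, 20.0)       # Distance Is Far
    bad = ramp_down(bot_health, 25.0, 75.0)  # Bot Health Is Bad
    good = ramp_up(bot_health, 25.0, 75.0)   # Bot Health Is Good
    fear_high = min(near, bad)   # IF Near AND Bad THEN Fear Is High
    fear_low = min(far, good)    # IF Far AND Good THEN Fear Is Low
    # defuzzify: weighted average of the representative fear values 1.0 and 0.0
    if fear_high + fear_low == 0:
        return 0.5
    return (fear_high * 1.0 + fear_low * 0.0) / (fear_high + fear_low)
```

Intermediate inputs produce intermediate fear values, which is the "matter of degree" behaviour that makes the Bot's reactions less abrupt than a crisp rule system.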

2.10  Agent-Oriented Programming

This technique is based on Object-Oriented Programming. OOP uses methods to exchange information between Objects; Agent-Oriented Programming refers to these Objects as Agents and extends the OOP framework by adding states to them. These states could consist of emotions, beliefs or decisions. It is similar to Scripting and RBS systems in that rules have to be defined. It is a plausible technique, as all of the Bot's actions can be covered, but it would be restricted by the rules defined.

Chapter 3 – Critical Evaluation

3.1    Artificial Neural Networks

Many studies have covered the use of ANNs, but few have implemented an entire Bot using only ANNs. Other technologies can render some actions less expensively than ANNs and so are often preferred. The time spent training the networks can be expensive, and debugging is more complex than with other methods.

The AI must be trained using examples. Using training data recorded from human players trains the network to behave more realistically. One of the strengths of these networks lies in their pattern-matching prowess, especially when dealing with uncertainty.

In a game environment, most objects are well defined which allows traditional methods to be successful. By utilising neural networks, the Bot is able to generalise, thus enabling it to deal with the unexpected.

Utilising 5 smaller NNs, as opposed to a single large network, presents various advantages: smaller NNs are more accurate and require less computational power. The physics engine will deal with obstacle avoidance.

3.2    Genetic Algorithms

In order to optimise the network weights, genetic algorithms are applied, tasking the GA with evolving these weights. Back-propagation could be used, but it is better suited to simpler networks. Optimisation consists of mapping all the weights of the NN connections between neurons, plus the bias values, into a GA chromosome. During evolution, the Bots that survive pass on their traits to the next generation; these traits are encoded in chromosomes, which are combined in a process known as crossover. Random mutations can also be introduced, and if a mutation makes an improvement, it is passed on to the next generation. The population size should be set to 40.
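The evolution loop described above can be sketched as follows. The chromosome length, mutation parameters and fitness function are placeholders: in practice each chromosome would hold every NN weight and bias, and fitness would come from running the Bot in-game:

```python
import random

# Sketch of GA evolution of weight chromosomes (population 40, as above).
rng = random.Random(0)
POP, GENES = 40, 10

def fitness(chrom):
    # placeholder fitness: in the real system, score the Bot's performance.
    # Here, chromosomes closer to all-zero weights score higher (toy target).
    return -sum(g * g for g in chrom)

def crossover(a, b):
    cut = rng.randrange(1, GENES)          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    # each gene has a small chance of a random Gaussian perturbation
    return [g + rng.gauss(0, 0.3) if rng.random() < rate else g
            for g in chrom]

pop = [[rng.uniform(-1, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]            # only the fittest survive
    children = [mutate(crossover(rng.choice(survivors), rng.choice(survivors)))
                for _ in range(POP - len(survivors))]
    pop = survivors + children

best = max(pop, key=fitness)
```

Keeping the survivors unchanged (elitism) guarantees the best chromosome never regresses between generations, so an improving mutation is retained exactly as the text describes.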

As well as being efficient, this algorithm is much quicker than other training methods. Over time, realistic emergent behaviours should appear.

Chapter 4 – High Level Design Specification

4.1    Training Algorithms

NNs must be trained with relevant data from the environment. Every action that is to be implemented must be represented algorithmically; this allows realistic behaviours to be modelled.

Suitable training data could be generated by recording 2 human players in a death-match for a period of 60 minutes. A diverse level, including elevated areas, should be used. The players' behaviours would be quite complex; in order to record them, they are broken up into smaller, simpler actions.

New training data would be required to train the Team Bots, with 8 human players taking part under the same conditions.

Figure 4.2 shows the ANN’s to be used for our general system. Alex Champandard’s tutorial [ 3 ] was consulted to create the Navigation system. For Combat, Weapon, Aim and Fire, [ 4 ] and [ 5 ] were referred to.

To extract meaningful data, the Bot's perception of the player must be constructed. Values are calculated at each time step: the Bot's current position and his opponent's position are recorded for each frame. The opponent's current directional vector and the vector running from the Bot to the enemy are computed; the latter represents the Bot's view vector. The opponent's velocity relative to the Bot's field of view can be derived by calculating the projection of his directional vector onto the vectors parallel and perpendicular to the Bot's view vector. Figure 4.1 shows the reconstruction of the Bot's field of view from the low-level data that is available. [ 5 ]



V parallel = ( a · c ) / | a |

V perpendicular = ( b · c ) / | b |

where:

a = the Bot's view vector

b = a vector perpendicular to the view vector

c = the direction of the opponent's motion

For the opponent’s velocity in the vertical plane, the change in his position on the Z-axis between successive frames is calculated as:

V vertical = z(t) – z(t-1)

Elevation will play a factor in the Bot's perception of the player's movements. If the opponent is directly above or beneath the Bot, the Z component of the calculations will be of no use. If the players are on the same horizontal plane and the opponent ascends or descends towards the Bot, the Bot's pitch will need to be adjusted. Likewise, if the opponent is above or below the Bot and moving along the parallel vector, the Bot will have to adjust his pitch.

To account for this, the parallel and vertical velocities are scaled according to the angular elevation θ of the opponent from the Bot:

V parallel = sin ( θ ) (( a · c ) / | a |)

V vertical = cos ( θ ) ( z(t) – z(t-1) )

By utilising these formulas, an approximation of the human visual system is recreated and the Bot is capable of collecting the same visual information as a human player.
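The projections above can be computed as sketched below, with the horizontal components in 2D (x, z handled separately) and "·" taken as the dot product. The example vectors are illustrative:

```python
import math

# Sketch of the velocity projections above.
# a = Bot's view vector, b = perpendicular to a, c = opponent's motion direction.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def norm(u):
    return math.hypot(u[0], u[1])

def perceived_velocities(a, c, z_now, z_prev, theta=0.0):
    b = (-a[1], a[0])                          # perpendicular to the view vector
    v_parallel = dot(a, c) / norm(a)           # V parallel = (a . c) / |a|
    v_perpendicular = dot(b, c) / norm(b)      # V perpendicular = (b . c) / |b|
    # elevation-adjusted components; theta is the opponent's angular elevation
    v_parallel_adj = math.sin(theta) * v_parallel
    v_vertical_adj = math.cos(theta) * (z_now - z_prev)
    return v_parallel, v_perpendicular, v_parallel_adj, v_vertical_adj

# opponent moving straight along the Bot's view vector, no elevation change
vp, vq, vpa, vva = perceived_velocities(a=(1.0, 0.0), c=(2.0, 0.0),
                                        z_now=0.0, z_prev=0.0)
```

Here the opponent's full speed appears in the parallel component and nothing in the perpendicular one, matching the geometry described in the text.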

The Strategy NN’s training data requires information from the world environment. This data includes Health, Ammo, Armour, Bot Position, Bot Direction, Last Enemy Position, Last Enemy Direction, Last Seen Timer, Distance to Object In Front, Furthest Possible Distance in X and the Furthest Possible Distance in Z.

Training data for the Combat NN should consist of examples where the enemy is in sight, and for a few seconds after he goes out of sight. This covers situations where enemies hide behind objects, reappear and fire.

For the navigation system, the players' movement from point to point in a map should be recorded. This would also capture obstacle avoidance, as the players would not bump into objects. Breaking the route down into smaller sections simplifies the problem.

The Weapon NN’s training data is comprised of the distance between the player and the opponent, and a bitmask representing weapon availability.

The Fire Network's training data should consist of the current weapon, the distance from the opponent, and the angular difference between the vector from player to opponent and the player's current aiming vector.

For the Aim NN, the formulas described previously are included in the training data. To achieve a realistic aiming structure, the previous inaccuracy is taken into account. The training data includes the current weapon, the distance from the player to the opponent, the perpendicular, parallel and vertical velocities of the opponent, and the previous angular inaccuracy.

4.2    General Bot

The following figure outlines an entire ANN structure, which could be streamlined after testing. A bitmask is used to indicate which weapons are available with ammunition.
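The weapon-availability bitmask can be encoded as sketched below. The bit assignments are illustrative; they simply need to be consistent between the encoder and the networks that consume the mask:

```python
# Sketch of the weapon-availability bitmask: one bit per weapon slot.
PRIMARY, PISTOL, GRENADE = 1 << 0, 1 << 1, 1 << 2

def weapon_bitmask(has_primary, has_pistol, has_grenade):
    mask = 0
    if has_primary:
        mask |= PRIMARY
    if has_pistol:
        mask |= PISTOL
    if has_grenade:
        mask |= GRENADE
    return mask

mask = weapon_bitmask(True, False, True)   # primary and grenades, no pistol
```

A single integer input per Bot is cheap to feed into the Weapon NN, and individual bits can be tested with `mask & PISTOL`-style checks.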



4.2.1 Strategy NN

This network is responsible for selecting which sub-network to use. Its inputs require information about the world:

1.         Health

2.         Ammo

3.         Armour

4.         Bot Position (Vector)

5.         Bot Direction (Vector)

6.         Last Enemy Position (Vector)

7.         Last Enemy Direction (Vector)

8.         Last Seen Timer

9.         Distance to Object In Front – Sensor

10.       Furthest Possible Distance in X – Sensor

11.       Furthest Possible Distance in Z – Sensor

4.2.2 Combat NN

This sub-behaviour deals with the Bot's movement when engaging the enemy. The network learns movements such as left, right, forward and back, and can also determine whether to jump or crouch.

The inputs required are:

1.         Strategy Neural Network

2.         Bot Position (Vector)

3.         Bot Direction (Vector)

4.         Last Enemy Position (Vector)

5.         Last Enemy Direction (Vector)

6.         Enemy in Sight

7.         Last Seen Timer

8.         Enemy Active Weapon

9.         Enemy is Shooting

10.       Ammo

11.       Bot Active Weapon

12.       Health

13.       Armour

4.2.3 Weapon NN

This network controls weapon selection depending on the available ammo. The inputs required are:

1.         Strategy Neural Network

2.         Bitmask

3.         Distance between Bot and opponent

4.         Primary Weapon

5.         Pistol

6.         Grenades

7.         Ammo

Output generated is the active weapon.

4.2.4 Fire NN

This network establishes whether the Bot should fire or not. The inputs required are:

1.         Aim Neural Network

2.         Weapon Neural Network

3.         Combat Neural Network

4.         Distance between Bot and opponent

5.         Angular difference between the vector from Bot to opponent and the Bot's current aiming vector.

The output generated is 1 if the gun is fired, 0 otherwise.

4.2.5 Aim NN

This network deals with controlling the Bot's aim in response to the player's movements. The inputs required are:

1.         Weapon Neural Network

2.         Fire Neural Network

3.         Distance between Bot and opponent

4.         Parallel Velocity of opponent

5.         Vertical Velocity of opponent

6.         Perpendicular Velocity of opponent

7.         Previous Angular Inaccuracy

The output generates a new angle.

4.2.6 Navigation NN

This NN deals with moving the Bot whilst not engaged in combat. The physics engine gives the Bot the distances and vectors required. Traditional robots utilise sensors to navigate around obstacles; similarly, the Bot is fitted with 3 sensors, which indicate the distance to the object in front and the furthest possible distances in the X and Z directions. When an obstacle is encountered, the furthest possible distances in the X and Z directions are calculated and the longer one is chosen. Figure 4.3 shows this process: there are 3 obstacles, and the blue arrow shows the path that will be chosen. This allows uncertainty to be expressed. [ 3 ]





The Bot also requires the ability to climb stairs and jump over objects. The sensors can determine the furthest possible walking distance in a direction before an obstacle is encountered. The Bot's bounding box is virtually simulated walking towards the obstacle; it moves one step or jumps upwards, and if the obstacle has not been traversed, the direction must be changed.
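The sensor-based direction choice described above can be sketched as follows. The `ray_distance` callback is a hypothetical stand-in for the query the physics engine would answer, and the obstacle threshold is an illustrative value:

```python
# Sketch of the navigation decision: if an obstacle is close ahead, probe the
# furthest free distances along X and Z and steer towards the longer one.
def choose_direction(ray_distance, pos, obstacle_threshold=1.5):
    ahead = ray_distance(pos, "forward")   # front-facing sensor reading
    if ahead > obstacle_threshold:
        return "forward"                   # path ahead is clear, keep going
    dist_x = ray_distance(pos, "x")        # furthest possible distance in X
    dist_z = ray_distance(pos, "z")        # furthest possible distance in Z
    return "x" if dist_x >= dist_z else "z"   # pick the longest open path

# toy environment: wall directly ahead, more open space along Z than X
distances = {"forward": 1.0, "x": 3.0, "z": 8.0}
direction = choose_direction(lambda pos, axis: distances[axis], pos=(0, 0))
```

In the full design these raw sensor readings are also fed into the Navigation NN, so the learned behaviour can override this simple longest-distance heuristic.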

The inputs required are:

1.         Strategy Neural Network

2.         Bot Position – X

3.         Bot Position – Y

4.         Bot Position – Z

5.         Distance to Object In Front – Sensor

6.         Furthest Possible Distance in X – Sensor

7.         Furthest Possible Distance in Z – Sensor

8.         Last Enemy Position (Vector)

9.         Last Enemy Direction (Vector)

10.       Enemy Last Seen Timer

11.       Ammo

12.       Health

13.       Armour

14.       Grenade

15.       Pistol

16.       Primary Weapon

17.       Jump

4.3    Team CQC Bot

For Team CQC, basic emotion and flocking between the NPCs is desired, which requires adding two more networks; the structure is shown in Figure 4.5. Two more sensors are added to the Bot: one relaying how many enemies are alive in the FOV, and one indicating which enemy is closest to the Bot.



4.3.1 Emotion NN

This NN gives our Bots their emotional abilities. The feelings are rudimentary but suit an FPS. The emotions mapped are Fear, Revenge, Anger and Hunt. The training data would consist of all the inputs; the earlier principle of dividing complex actions into smaller segments would be applied in order to extract meaningful data. The network then chooses a course of action depending on the current situation.

  • The initial emotion is Hunt. Anger is triggered once the enemy has been spotted.
  • Fear is triggered by low health.
  • Revenge is triggered if another NPC is killed in the field of view of the Bot.


If the Bot's health is low, it begins to run away from the enemy in search of health, ammo and armour. Depending on which enemy is in view, the Bot chooses its direction by determining the vector opposite to the enemy's current position and directional vector.


If a Bot dies in the field of view of another Bot, the Revenge behaviour is triggered. This may be bad tactically, but it does make the behaviour of the Bot more interesting; if the Bots tend to lose heavily, the weight associated with Revenge can be altered.


If the Bots are angry, they all perform a melee attack on the opponents, moving quickly towards the players.


The Hunt behaviour employs different tactics. The team could split into pairs, achieved by randomly assigning the NPCs and setting different directional vectors. They could throw grenades from afar before attacking, or one NPC could be sent to a raised position to snipe. If the team is sustaining heavy losses, a different tactic can be selected.

This network has the following inputs:

1.         Strategy Neural Network

2.         Ammo

3.         Armour

4.         Health

5.         Enemy Alive Sensor

6.         Closest Enemy to Bot Sensor (Vector)

7.         NPC Killer Direction (Vector)

8.         NPC Killer Position (Vector)

9.         NPC1 death?

10.       NPC2 death?

11.       NPC3 death?

12.       Link with NPC1

13.       Link with NPC2

14.       Link with NPC3

15.       Raised Elevation

16.       Current Weapon

17.       Grenade

18.       Snipe

19.       Enemy Last Seen Timer

The outputs of this network determine the next course of action. They include:

  • Link with another NPC
  • Throw Grenades
  • New Position
  • New Direction
  • Snipe
  • Initiating Combat NN
  • Initiating Navigation NN
  • Initiating Communication NN
  • Initiating Weapon NN

4.3.2 Communication NN

This network deals with the unit acting as a team. By implementing flocking principles, the team is able to stick together: from the inputs, each Bot can determine the positions of the other Bots, enabling them to move as a cohesive unit. The training data would consist of all the inputs.

The inputs are:

1.         Strategy Neural Network

2.         Emotion Neural Network

3.         Bot Position (Vector)

4.         Bot Direction (Vector)

5.         NPC1 Position (Vector)

6.         NPC1 Direction (Vector)

7.         NPC2 Position (Vector)

8.         NPC2 Direction (Vector)

9.         NPC3 Position (Vector)

10.       NPC3 Direction (Vector)

11.       Enemy Alive Sensor

12.       Closest Enemy to Bot Sensor (Vector)

13.       Enemy Last Seen Timer

14.       Ammo

15.       Health

16.       Armour

4.4 Other potential applications that would benefit from the AI

XXXXXXX is a similar FPS which could utilise the AI developed here. From an AI perspective, the main differences are that there are more guns to choose from, and that it is based on aliens whose physical capabilities are greater, enabling them to jump higher and move faster. Adapting the AI would involve adding inputs to cover the extra guns, and increasing the weight of the jump functionality, as jumping is a more effective tool in this game.

Chapter 5 – Suggested Testing

After testing, the weights used during training can be altered to improve the Bot. Some experiments include:

  • Create a death-match of up to 25 Bots with different fitness values attached and record the highest scores. Some samples of the original Bot should be kept in order to track the rate of improvement. Penalties could be introduced for Bots that are killed.
  • Record statistics on which Bots collect the most health and ammo.
  • Increase the number of sensors incrementally and compare the results.
  • Use the errors found in testing to derive further experiments.
  • Repeat the same steps for the Team Bots, emphasising each of the emotions to investigate which are tactically sound.
  • Once testing has been completed, experiment against human players, generating new test data depending on the results.

Chapter 6 – Conclusions

Due to the complex nature of ANNs and the man-hours required to build them, they have been used in very few games. Examples include the Colin McRae Rally series and NERO. No commercial FPS could be found that uses ANNs for its Bot technology.

This Bot utilises structures from other experiments [ 4 ], [ 5 ] and builds on them. Only testing can determine whether the Bot is fully functional: some networks may have too many inputs whilst others may not have enough. Too much information will confuse the Bots during training, and too little will make them inadequate.

There are many advantages to the techniques employed. The Bot is adaptable to new environments; with appropriate training data it is capable of learning complex behaviours; rudimentary human emotions are depicted; and different tactical plans can be employed. By utilising the same visual system as humans and introducing an error component into the aiming, the Bot becomes more realistic. Many game developers give their Bots superhuman capabilities in order to compensate for their inferior intelligence; this sacrifices believability and invariably diminishes the player's fun. The design also avoids this pitfall.

With proper testing and adjustments, this Bot could be employed in any style of FPS.


[ A ] David M. Bourg & Glenn Seeman (2004) AI for Game Developers, O'Reilly Publishing, ISBN: 0-596-00555-5.

[ B ] Dr. David King & Dr. Suheyl Ozveren (2007) Artificial Intelligence for Computer Games, CS1130A07 Course Notes.

[ C ] Mat Buckland (2002) AI Techniques for Game Programming, Premier Press, ISBN: 1-931841-08-X.

[ D ] Mat Buckland (2005) Programming Game AI by Example, Wordware Publishing, ISBN: 1-55622-078-2.

[ E ] Alex J. Champandard (2003) AI Game Development: Synthetic Creatures with Learning and Reactive Behaviors, New Riders Publishing, ISBN: 1-5927-3004-3.

[ F ] Cornelius T. Leondes (1998) Algorithms and Architectures, Academic Press, ISBN: 0-12-443861-X.

[ G ] Penny Baillie-De Byl (2004) Programming Believable Characters for Computer Games, Charles River Media, ISBN: 1-58450-323-8.


[ 1 ] http://ai-depot.com/FiniteStateMachines/ (Date Accessed 24/10/07)

[ 2 ] http://penguin.ewu.edu/~ainoue/EmoBot/docs/abstract.pdf (Date Accessed 24/10/07)

[ 3 ] http://ai-depot.com/BotNavigation/Obstacle-Introduction.html (Date Accessed 24/10/07)

[ 4 ] http://portal.acm.org/citation.cfm?id=1067343.1067374 (Date Accessed 24/10/07)

[ 5 ] http://www.computing.dcu.ie/~bgorman/CGAMES_07.pdf (Date Accessed 24/10/07)

[ 6 ] http://www.gamasutra.com/features/20001101/woodcock_01.htm (Date Accessed 24/10/07)

[ 7 ] http://ai-depot.com/GameAI/Bot-Introduction.html (Date Accessed 24/10/07)

[ 8 ] http://www.kbs.twi.tudelft.nl/docs/MSc/2001/Waveren_Jean-Paul_van/thesis.pdf (Date Accessed 24/10/07)

[ 9 ] http://www.roaringshrimp.com/WS04-04NCombs.pdf (Date Accessed 24/10/07)

[ 10 ] http://www.cs.wlu.edu/~levy/pubs/AIIDE05OverholtzerC.pdf (Date Accessed 24/10/07)

[ 11 ] http://web.media.mit.edu/~bruce/Site01.data/papers_0184.pdf (Date Accessed 24/10/07)

[ 12 ] http://aigamedev.com/tutorials/NeuralNetwork.html (Date Accessed 26/10/07)

[ 13 ] http://www.cse.unsw.edu.au/~ypisan/ (Date Accessed 26/10/07)

[ 14 ] http://hdl.handle.net/1842/879 (Date Accessed 26/10/07)

[ 15 ] http://characters.media.mit.edu/Papers/challenges.pdf (Date Accessed 26/10/07)

[ 16 ] http://www-ksl.stanford.edu/people/pdoyle/papers/symposium.pdf (Date Accessed 26/10/07)

[ 17 ] http://aigamedev.com/tutorials/RuleBasedSystem.html (Date Accessed 26/10/07)

[ 18 ] http://fear.sourceforge.net/docs/latest/guide/ (Date Accessed 26/10/07)

[ 19 ] http://neuralnetworks.ai-depot.com/ (Date Accessed 26/10/07)

[ 20 ] http://www.red3d.com/cwr/steer/ (Date Accessed 27/10/07)

[ 21 ] http://aass.oru.se/Agora/FLAR/HFC/home.html (Date Accessed 27/10/07)

Appendix I  – Glossary of Acronyms

NPC    -           Non Playing Character

FPS     -           First Person Shooter

AI        -           Artificial Intelligence

AAS    -           Area Awareness System

Bot      –           Robot

CPU    -           Central Processing Unit

FSM    -           Finite State Machine

GA      -           Genetic Algorithm

NN      -           Neural Network

Ammo -           Ammunition

FOV    -           Field of View

Appendix II – Keywords

artificial neural networks, perceptron, multilayer feed forward, first person shooter, genetic algorithms, machine learning, artificial intelligence, AI Bot training.

