Updated: Jul 16

Smart AI vs Simple AI: A Unity Machine Learning Comparison



In this project, I explored the capabilities of Machine Learning (ML) in game development by comparing a traditional rule-based AI (NavMesh) to a learning-based AI agent trained using Unity ML-Agents.

The goal was simple: the Smart AI (ML-Agent) had to learn to survive and adapt, while the Simple AI (NavMesh agent) pursued the player using fixed logic. Through multiple training phases and environment iterations, I evaluated how a learning agent can evolve, adapt, and outperform scripted behaviors.


Key Highlights:

  • NavMesh AI used static pathfinding and tagging logic.

  • ML Agent used RayPerceptionSensorComponent3D to detect enemies and collectibles.

  • Reward-based training led to intelligent behaviors: dodging enemies, collecting items, and targeting threats dynamically.
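The reward shaping behind those behaviors can be sketched as follows. This is an illustrative Python sketch, not the project's actual code: the event flags and weight values are assumptions, and in the real project this logic lives in a C# `Agent` subclass calling `AddReward`.

```python
# Illustrative sketch of per-step reward shaping for the ML Agent.
# Event flags and weights are assumptions for this example; the real
# project implements this in a Unity ML-Agents C# Agent subclass.

def step_reward(collected_item: bool, hit_by_enemy: bool,
                damaged_enemy: bool, alive: bool) -> float:
    """Return the shaped reward for one environment step."""
    reward = 0.0
    if collected_item:
        reward += 0.5      # encourage multitasking: pick up collectibles
    if damaged_enemy:
        reward += 1.0      # reward engaging threats
    if hit_by_enemy:
        reward -= 1.0      # penalize failing to dodge
    if alive:
        reward += 0.001    # small living bonus to encourage survival
    return reward
```

Balancing these weights is what steers the trade-off between dodging, collecting, and attacking: if the collectible reward dominates, the agent ignores enemies, and vice versa.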


Three Phases of Training:

  1. Phase 1: Basic movement and attack training using random spawn logic.

  2. Phase 2: Added collectible logic to encourage multitasking and reward collection.

  3. Phase 3: Implemented ray perception, camera lock-on, and refined training to stabilize reward and loss metrics.
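The ray perception added in Phase 3 can be illustrated with a simplified 2D version: `RayPerceptionSensorComponent3D` casts a fan of rays from the agent and reports, per ray, which tagged object was hit and the normalized hit distance. The Python sketch below mimics that observation format with circles standing in for colliders; all names, defaults, and geometry here are assumptions for illustration.

```python
import math

# Simplified 2D analogue of RayPerceptionSensorComponent3D: cast a fan
# of rays and report, per ray, the nearest tagged object hit and the
# normalized hit distance (1.0 = no hit). Targets are circles here,
# standing in for Unity colliders.

def cast_rays(agent_pos, facing_deg, targets, num_rays=5,
              fov_deg=90.0, max_dist=10.0):
    """targets: list of (tag, (x, y), radius).
    Returns one (tag_or_None, normalized_distance) tuple per ray."""
    results = []
    for i in range(num_rays):
        # spread rays evenly across the field of view
        angle = math.radians(facing_deg - fov_deg / 2
                             + fov_deg * i / (num_rays - 1))
        dx, dy = math.cos(angle), math.sin(angle)
        best = (None, 1.0)
        for tag, (tx, ty), radius in targets:
            ox, oy = tx - agent_pos[0], ty - agent_pos[1]
            t = ox * dx + oy * dy          # distance along the ray
            if t < 0 or t > max_dist:
                continue
            perp = abs(ox * dy - oy * dx)  # distance from center to ray
            if perp <= radius and t / max_dist < best[1]:
                best = (tag, t / max_dist)
        results.append(best)
    return results
```

Feeding observations in this per-ray (tag, distance) form is what lets the trained policy distinguish "enemy ahead" from "collectible to the left" without any hand-written targeting code.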


Tools Used:

  • Unity 2023.2+

  • ML-Agents Toolkit (v0.28+)

  • TensorBoard for monitoring training progress

  • Anaconda for running training scripts


Results:

  • The ML-trained agent learned to rotate toward enemies smoothly, manage multiple tasks (combat + item collection), and react dynamically to changing environments.

  • Training graphs showed consistent improvement, demonstrating ML's flexibility and adaptability in gameplay AI.
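For context on reading those graphs: TensorBoard displays scalar curves through a smoothing slider similar to an exponential moving average, so "consistent improvement" means the smoothed mean episode reward trends upward rather than any single noisy episode. A minimal sketch of that kind of smoothing (the sample data here is made up; in practice the input would be the logged cumulative rewards):

```python
# Exponential-moving-average smoothing, similar in spirit to
# TensorBoard's smoothing slider. Higher weight = smoother curve.

def smooth(values, weight=0.9):
    """Return an EMA-smoothed copy of a scalar training curve."""
    smoothed, last = [], values[0]
    for v in values:
        last = last * weight + v * (1 - weight)
        smoothed.append(last)
    return smoothed
```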


Future Vision:

  • Introducing procedural environments to test AI generalization.

  • Simulating emotion-like behaviors (e.g., retreating when low on health).

  • Enabling self-play training to evolve two ML agents against each other, inspired by OpenAI Five's self-play approach in Dota 2.


Full Project Documentation (Google Docs)
