MDP-ValueIteration-Visualizer


This application implements a Markov Decision Process (MDP) using the Value Iteration algorithm to determine optimal policies for navigating a grid-based environment with rewards and obstacles. It provides a visual interface to display state utilities, policy paths, and dynamic updates based on user-defined goals and constraints. Users can define sources, destinations, and obstacles, adjust transition probabilities, and visualize how the agent learns to find the optimal path.
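As a rough illustration of what the visualizer computes, here is a minimal, self-contained value iteration sketch in C#. This is not the application's actual source code: the grid size, discount factor, reward placement, obstacle cell, and the 0.8/0.1/0.1 slip model are all assumptions made for the example.

```csharp
// Illustrative value iteration on a small grid with a goal reward,
// one obstacle, and stochastic moves (hypothetical parameters).
using System;

class ValueIterationSketch
{
    const int Size = 4;           // assumed 4x4 grid
    const double Gamma = 0.9;     // assumed discount factor
    const double Intended = 0.8;  // probability of moving as intended
    const double Slip = 0.1;      // probability of slipping to either side

    static readonly (int dr, int dc)[] Actions = { (-1, 0), (1, 0), (0, -1), (0, 1) };

    static void Main()
    {
        double[,] reward = new double[Size, Size];
        bool[,] obstacle = new bool[Size, Size];
        reward[0, Size - 1] = 1.0;   // goal in the top-right corner
        obstacle[1, 1] = true;       // one blocked cell

        double[,] v = new double[Size, Size];
        for (int iter = 0; iter < 1000; iter++)
        {
            double delta = 0;
            double[,] next = new double[Size, Size];
            for (int r = 0; r < Size; r++)
            for (int c = 0; c < Size; c++)
            {
                if (obstacle[r, c]) continue;
                double best = double.NegativeInfinity;
                foreach (var a in Actions)
                {
                    // Expected value: intended move plus two sideways slips.
                    var (left, right) = Perpendicular(a);
                    double q = Intended * Lookup(v, obstacle, r, c, a)
                             + Slip * Lookup(v, obstacle, r, c, left)
                             + Slip * Lookup(v, obstacle, r, c, right);
                    best = Math.Max(best, q);
                }
                next[r, c] = reward[r, c] + Gamma * best;
                delta = Math.Max(delta, Math.Abs(next[r, c] - v[r, c]));
            }
            v = next;
            if (delta < 1e-6) break;  // converged
        }
        Console.WriteLine($"U(start) = {v[Size - 1, 0]:F3}");
    }

    // Value of the cell reached by stepping d from (r,c); bumping into
    // a wall or obstacle leaves the agent in place.
    static double Lookup(double[,] v, bool[,] blocked, int r, int c, (int dr, int dc) d)
    {
        int nr = r + d.dr, nc = c + d.dc;
        if (nr < 0 || nr >= Size || nc < 0 || nc >= Size || blocked[nr, nc])
            return v[r, c];
        return v[nr, nc];
    }

    static ((int, int), (int, int)) Perpendicular((int dr, int dc) a) =>
        ((a.dc, a.dr), (-a.dc, -a.dr));
}
```

Once the utilities converge, the greedy policy at each cell is simply the action with the highest expected utility, which is what the application draws as the policy path.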

🚀 Requirements

  • .NET Framework 4.7.2 or higher
  • Windows OS (recommended)
  • Visual Studio (for development and code modifications)
  • C# Compiler (if building from source)

Ensure that the required .NET Framework is installed to run the application without issues.

⚡ Usage

  1. Download the Application:

    • Clone the repository or download the ZIP file.
    • Extract the ZIP file if downloaded.
  2. Run the Application:

    • Navigate to the extracted folder:
      MDP/bin/Release/
      
    • Double-click on MDP.exe to launch the application.

No additional installation is required. The app will open with a user-friendly GUI where you can start visualizing MDP policies right away.

📊 Features

  • Interactive grid-based environment
  • Customizable start points, goals, and obstacles
  • Adjustable transition probabilities for stochastic behavior
  • Real-time visualization of value iteration and optimal policies
  • Data export for path analysis

Feel free to contribute, report issues, or suggest new features!

Acknowledgments

Parts of this project page were adapted from the Nerfies page.

Website License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
