This application models a grid-based environment as a Markov Decision Process (MDP) and solves it with the Value Iteration algorithm to determine optimal navigation policies around rewards and obstacles. It provides a visual interface that displays state utilities, policy paths, and dynamic updates based on user-defined goals and constraints. Users can define sources, destinations, and obstacles, adjust transition probabilities, and visualize how the agent converges on the optimal path.
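As a rough illustration of what the solver computes, here is a minimal value-iteration sketch in C#. The class and variable names are hypothetical and not the application's actual code; it uses a deterministic 4x4 grid for brevity, while the application itself also supports stochastic transitions (sketched further below).

```csharp
using System;

// Minimal value-iteration sketch on a small deterministic grid (hypothetical
// names; not the application's actual code). Repeatedly applies the Bellman
// backup U(s) = R(s) + gamma * max_a U(s') until utilities converge.
class ValueIterationSketch
{
    const double Gamma = 0.9;     // discount factor
    const double Epsilon = 1e-4;  // convergence threshold

    static void Main()
    {
        int rows = 4, cols = 4;
        var reward = new double[rows, cols];
        reward[0, 3] = 1.0;  // example: goal cell in the top-right corner

        // Actions: up, down, left, right
        int[] dr = { -1, 1, 0, 0 };
        int[] dc = { 0, 0, -1, 1 };

        var utility = new double[rows, cols];
        double delta;
        do
        {
            delta = 0.0;
            var next = new double[rows, cols];
            for (int r = 0; r < rows; r++)
            {
                for (int c = 0; c < cols; c++)
                {
                    double best = double.NegativeInfinity;
                    for (int a = 0; a < 4; a++)
                    {
                        // Moves off the grid leave the agent in place.
                        int nr = Math.Min(rows - 1, Math.Max(0, r + dr[a]));
                        int nc = Math.Min(cols - 1, Math.Max(0, c + dc[a]));
                        best = Math.Max(best, utility[nr, nc]);
                    }
                    next[r, c] = reward[r, c] + Gamma * best;
                    delta = Math.Max(delta, Math.Abs(next[r, c] - utility[r, c]));
                }
            }
            utility = next;
        } while (delta > Epsilon);

        Console.WriteLine($"Utility of bottom-left cell: {utility[3, 0]:F3}");
    }
}
```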
- .NET Framework 4.7.2 or higher
- Windows OS (recommended)
- Visual Studio (for development and code modifications)
- C# Compiler (if building from source)
Ensure that the required .NET Framework is installed to run the application without issues.
- Download the Application:
  - Clone the repository or download the ZIP file.
  - Extract the ZIP file if downloaded.
- Run the Application:
  - Navigate to the extracted folder: `MDP/bin/Release/`
  - Double-click `MDP.exe` to launch the application.
No additional installation is required. The app will open with a user-friendly GUI where you can start visualizing MDP policies right away.
- Interactive grid-based environment
- Customizable start points, goals, and obstacles
- Adjustable transition probabilities for stochastic behavior (see the sketch after this list)
- Real-time visualization of value iteration and optimal policies
- Data export for path analysis
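To make the stochastic behavior concrete, a common grid model moves the agent in the intended direction with some probability and slips sideways otherwise. The sketch below shows an expected-utility computation under such a model; the probabilities (0.8 intended, 0.1 per perpendicular slip) and all names are illustrative assumptions, not the application's actual values or API.

```csharp
using System;

// Hypothetical stochastic grid model: the agent moves in the intended
// direction with probability 0.8 and slips to each perpendicular direction
// with probability 0.1. Probabilities and names are illustrative only.
static class StochasticModelSketch
{
    // Actions: 0 = up, 1 = down, 2 = left, 3 = right
    static readonly int[] Dr = { -1, 1, 0, 0 };
    static readonly int[] Dc = { 0, 0, -1, 1 };

    public static double ExpectedUtility(double[,] u, int r, int c, int action)
    {
        int slipA = action < 2 ? 2 : 0;  // first perpendicular direction
        int slipB = action < 2 ? 3 : 1;  // second perpendicular direction
        return 0.8 * UtilityAt(u, r + Dr[action], c + Dc[action])
             + 0.1 * UtilityAt(u, r + Dr[slipA], c + Dc[slipA])
             + 0.1 * UtilityAt(u, r + Dr[slipB], c + Dc[slipB]);
    }

    // Moves off the grid leave the agent in place (clamp to bounds).
    static double UtilityAt(double[,] u, int r, int c)
    {
        r = Math.Min(u.GetLength(0) - 1, Math.Max(0, r));
        c = Math.Min(u.GetLength(1) - 1, Math.Max(0, c));
        return u[r, c];
    }

    static void Main()
    {
        var utility = new double[,] { { 0.5, 0.7 }, { 0.3, 0.4 } };
        // Expected utility of attempting to move right from cell (1, 0).
        Console.WriteLine(ExpectedUtility(utility, 1, 0, 3).ToString("F3"));
    }
}
```

Plugging an expected-utility function like this into the value-iteration loop shown earlier, in place of the deterministic max over neighbors, is what lets a visualizer of this kind show how changing the slip probability reshapes the optimal policy.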
Feel free to contribute, report issues, or suggest new features!
Parts of this project page were adapted from the Nerfies page.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.