Chat Fusion is a contextual chatbot project designed to maintain conversation history and provide coherent, context-aware responses using advanced natural language processing techniques. The project leverages OpenAI's GPT model and is implemented using Python and FastAPI for robust backend support.
- Maintains a history of up to 10 recent messages for contextual awareness.
- Uses OpenAI's GPT-4 model to generate conversational responses.
- FastAPI integration for efficient REST API handling.
- Error handling for robust interaction.
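
To make these moving parts concrete, the sketch below shows one way such an endpoint could be wired together. It is not the project's actual `app/main.py`: the `/chat` route and the `user_input` field are taken from the API example later in this README, and everything else (the global in-memory history, the error handling, the `openai` client usage) is an assumption.

```python
# Illustrative sketch only -- the real app/main.py may differ.
import os
from collections import deque

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Keep only the 10 most recent messages so each request carries bounded context.
history = deque(maxlen=10)


class ChatRequest(BaseModel):
    user_input: str  # field name taken from the curl example further below


@app.post("/chat")
def chat(request: ChatRequest) -> dict:
    history.append({"role": "user", "content": request.user_input})
    try:
        completion = client.chat.completions.create(
            model="gpt-4",
            messages=list(history),
        )
    except Exception as exc:  # surface OpenAI/network failures as an HTTP error
        raise HTTPException(status_code=502, detail=str(exc))
    reply = completion.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return {"response": reply}
```

The repository itself is organized as follows: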
```text
chat_fusion/
├── app/
│   ├── main.py          # FastAPI application and chatbot integration
│   └── index.html       # Frontend for user interaction
├── requirements.txt     # Python dependencies
└── README.md            # Project documentation
```
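
A plausible `requirements.txt` for this stack might contain the packages below; the exact package names and the absence of version pins are assumptions, so defer to the file shipped in the repository.

```text
fastapi
uvicorn[standard]
openai
python-dotenv
pydantic
```

To run the project you will need: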
- Python 3.8+
- An OpenAI API key with access to GPT models
- Basic understanding of Python and FastAPI
- Clone the Repository:

  ```bash
  git clone https://github.com/yourusername/chat-fusion.git
  cd chat-fusion
  ```

- Create a Virtual Environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install Dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set Up Environment Variables: Create a `.env` file in the project root and add your OpenAI API key (a sketch of how the key can be loaded in code follows this list):

  ```
  OPENAI_API_KEY=your_openai_api_key
  ```

- Run the Application:

  ```bash
  uvicorn app.main:app --reload
  ```

- Access the Application: Open your browser and navigate to `http://127.0.0.1:8000`.
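
For reference, one common way to load the key from `.env` at startup is with `python-dotenv`; the snippet below is a sketch of that pattern, not necessarily how `app/main.py` does it.

```python
# Sketch: read OPENAI_API_KEY from a local .env file at startup.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # populates os.environ from .env in the working directory
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```

Once the server is running, you can interact with the chatbot in two ways: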
- Frontend: Interact with the chatbot via the provided HTML interface.
- API: Send a POST request to the `/chat` endpoint with the user input to receive a response.

  Example:
  ```bash
  curl -X POST http://127.0.0.1:8000/chat \
    -H "Content-Type: application/json" \
    -d '{"user_input": "Hello!"}'
  ```
  Response:
  ```json
  {
    "response": "Hi! How can I assist you today?"
  }
  ```
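
The same call can be made from Python; the client below uses only the standard library and assumes the server is running locally on port 8000 as described above.

```python
# Minimal Python client for the /chat endpoint (standard library only).
import json
from urllib import request

payload = json.dumps({"user_input": "Hello!"}).encode("utf-8")
req = request.Request(
    "http://127.0.0.1:8000/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with request.urlopen(req) as resp:
    print(json.load(resp)["response"])  # e.g. "Hi! How can I assist you today?"
```

Planned enhancements include: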
- Integration with a database for persistent conversation storage.
- Improved error handling and logging.
- Support for multiple GPT model versions.
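
As a rough illustration of the last item, the model name could be read from configuration rather than hard-coded; the `OPENAI_MODEL` variable name below is hypothetical.

```python
# Hypothetical: select the GPT model version via an environment variable.
import os

MODEL_NAME = os.getenv("OPENAI_MODEL", "gpt-4")  # falls back to the current default
# ...then pass model=MODEL_NAME to client.chat.completions.create(...)
```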
This project is licensed under the MIT License. See the LICENSE file for details.