Million Hands: Baccarat Chat AI

An AI-driven baccarat assistant that answers natural-language questions using empirical data from over 1 million simulated hands.

Overview

Million Hands is a data-driven baccarat assistant built to explore how large-scale simulation data, classical machine learning, and modern LLMs can work together in a production system. The project is backed by a database of over one million simulated baccarat hands generated from a full shoe-level game engine.

The simulation data is used in two ways: directly for empirical querying (streaks, probabilities, side bets), and as training data for a custom RandomForest model that learns outcome tendencies from historical hand sequences. A GPT-4–powered agent orchestrates tool usage, deciding when to query raw data, invoke the trained model, or provide theoretical explanations.

The focus of the project is instructional and exploratory: it demonstrates how deterministic simulation, supervised learning, and LLM-based reasoning can be combined into a real, end-to-end AI application across mobile platforms.

AI inside

Million Hands: Baccarat Chat AI combines large language models, supervised machine learning, and deterministic simulation data within a single system. OpenAI GPT-4 powers the conversational interface, including a tool-using LangChain agent that answers empirical questions by issuing structured SQL queries against the baccarat database, alongside a separate GPT-4 theory mode for explaining rules and mathematical concepts.
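The empirical tool the agent calls boils down to structured SQL against the simulated-hands database. The sketch below shows the kind of aggregate query such a tool would issue, using an in-memory SQLite table as a stand-in; the table name, schema, and `outcome_frequencies` helper are illustrative assumptions, not the app's actual implementation.

```python
import sqlite3

# Toy stand-in for the simulated-hands database (table name and schema are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hands (shoe_id INTEGER, hand_no INTEGER, outcome TEXT)")
conn.executemany(
    "INSERT INTO hands VALUES (?, ?, ?)",
    [(1, 1, "P"), (1, 2, "B"), (1, 3, "B"), (1, 4, "T"), (2, 1, "B"), (2, 2, "P")],
)

def outcome_frequencies(conn):
    """The kind of structured query an agent's SQL tool would run for a
    question like 'how often does Banker win?'."""
    rows = conn.execute(
        "SELECT outcome, COUNT(*) * 1.0 / (SELECT COUNT(*) FROM hands) "
        "FROM hands GROUP BY outcome"
    ).fetchall()
    return dict(rows)

freqs = outcome_frequencies(conn)
```

In the real system the agent decides at runtime whether a question needs this empirical path or the separate theory mode.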

In parallel, the app integrates a custom next-hand prediction model trained on over 1,000,000 simulated baccarat hands. The training data is generated at the shoe level using a full game engine: each shoe is shuffled with a recorded random seed (shuffle_seed), a burn card is applied, and hands are dealt using standard baccarat drawing rules. Hand histories reset at each new shoe to prevent sequence leakage across shoes.
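A minimal sketch of that shoe-level engine is shown below: an eight-deck shoe shuffled with a recorded seed, a single burn card, and hands dealt with the standard tableau (naturals stand; player draws on 0-5; banker's third-card rule depends on the player's third card). The `cut_card` depth and single-card burn are simplifying assumptions; the real engine's exact procedure isn't documented here.

```python
import random

# Eight decks: A=1, 2-9 at face value, 10/J/Q/K count as 0.
CARD_VALUES = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 0, 0] * 4 * 8

def play_hand(shoe):
    """Deal one hand with standard baccarat drawing rules; returns 'P', 'B', or 'T'."""
    p = [shoe.pop(), shoe.pop()]
    b = [shoe.pop(), shoe.pop()]
    pt, bt = sum(p) % 10, sum(b) % 10
    if pt < 8 and bt < 8:  # no natural 8/9
        third = None
        if pt <= 5:  # player draws on 0-5, stands on 6-7
            third = shoe.pop()
            p.append(third)
            pt = sum(p) % 10
        if third is None:
            if bt <= 5:  # player stood: banker draws on 0-5
                b.append(shoe.pop())
        else:
            # Banker's tableau, keyed on banker total vs. player's third card.
            draws = {0: True, 1: True, 2: True,
                     3: third != 8,
                     4: 2 <= third <= 7,
                     5: 4 <= third <= 7,
                     6: 6 <= third <= 7,
                     7: False}[bt]
            if draws:
                b.append(shoe.pop())
        bt = sum(b) % 10
    return "P" if pt > bt else "B" if bt > pt else "T"

def deal_shoe(shuffle_seed, cut_card=14):
    """Shuffle with a recorded seed, burn a card, and deal hands to the cut card."""
    rng = random.Random(shuffle_seed)
    shoe = CARD_VALUES[:]
    rng.shuffle(shoe)
    shoe.pop()  # simplified single burn card (assumption)
    outcomes = []
    while len(shoe) > cut_card:
        outcomes.append(play_hand(shoe))
    return outcomes  # one shoe's sequence; histories reset at the next shoe

outcomes = deal_shoe(shuffle_seed=42)
```

Recording `shuffle_seed` makes every shoe exactly reproducible, which is what lets per-shoe hand sequences double as clean training data.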

The prediction model is a supervised scikit-learn RandomForest trained on realistic per-shoe hand sequences. Input features are derived from hashed n-gram representations of recent outcome sequences, allowing the model to learn conditional outcome tendencies based on local game context. The model returns probability estimates for the next outcome, which are served independently of the LLM and surfaced to the user when predictive mode is enabled.
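The feature pipeline can be sketched as follows: n-grams of the recent outcome string are hashed into a fixed-size count vector, and a RandomForest is fit on (window, next-outcome) pairs. The 64-dimension hash space, 3-gram order, window length, and the synthetic training stream are all assumptions for illustration; the real model trains on the simulated per-shoe sequences.

```python
import zlib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

DIM = 64  # hashed feature dimension (assumption, not the app's actual value)

def featurize(history, n=3, dim=DIM):
    """Hash the 1..n-grams of a recent outcome string (e.g. 'PBBPT') into a count vector."""
    x = np.zeros(dim)
    for k in range(1, n + 1):
        for i in range(len(history) - k + 1):
            x[zlib.crc32(f"{k}:{history[i:i+k]}".encode()) % dim] += 1.0
    return x

# Toy outcome stream standing in for per-shoe sequences from the simulated database.
rng = np.random.default_rng(0)
stream = "".join(rng.choice(list("PBT"), size=5000, p=[0.446, 0.458, 0.096]))

# Each training example: an 8-hand local context and the hand that followed it.
X = np.array([featurize(stream[i:i + 8]) for i in range(len(stream) - 8)])
y = np.array([stream[i + 8] for i in range(len(stream) - 8)])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Probability estimates for the next outcome given a recent context.
proba = dict(zip(clf.classes_, clf.predict_proba([featurize("PPBBPBPB")])[0]))
```

Hashing keeps the feature space fixed-size regardless of how many distinct n-grams occur, at the cost of occasional collisions, and `predict_proba` yields the per-outcome probabilities that are surfaced when predictive mode is enabled.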

Next Steps

The original next step for the project—extending the app to Android—has now been completed. After finishing the iOS version, the Android app was built using VS Code with Cline connected to GPT-5.1, leveraging AI-assisted development to translate functionality and architecture into a fully native Android implementation.

The Android version was completed in approximately five days of focused development and launched on January 6, 2025. The goal was to achieve feature parity while preserving the same data-driven behavior, correctness guarantees, and overall system design established in the original app.
