How We Built LLG: AI-Powered Information Retrieval for Finance!
2/20/2025 · 4 min read


Introduction:
At LLG, we’re on a mission to build focused AI applications for finance professionals. Traditionally, investors spend many hours, and thousands of dollars, researching companies and hiring attorneys to extract critical information from legal and financial documents. We asked ourselves: how can AI automate this?
LLG combines a simple UI with a state-of-the-art Retrieval Augmented Generation (RAG) system, allowing users to upload files, ask questions, and get key takeaways in seconds. In this article, we’ll take you behind the scenes to show how we built this feature and how it’s helping investors make faster, smarter decisions.
The Problem: The Slow and Costly World of Due Diligence
Financial due diligence is a critical but time-consuming process. Investors often sift through many pages of PDFs, spreadsheets, audio recordings, and written materials to find useful information. The work is slow, and professional investors frequently pay lawyers, consultants, and accounting firms substantial fees to assist with it.
The pain point is clear.
The Solution: AI-Powered Insights in Seconds
LLG solves this problem by leveraging a RAG system—a cutting-edge AI architecture that combines retrieval-based and generative AI models. Here’s how it works:
Upload Files: Users upload financial documents (e.g., statements, contracts, or reports).
Ask Questions: Users can ask natural language questions like, “What are the key risks in this financial statement?” or “What is the company’s revenue growth rate?”
Get Instant Answers: Our AI extracts relevant information and provides concise, accurate answers in as little as 3 seconds.
Key Takeaways: After uploading a file, our system automatically generates a summary of the most important points, saving users even more time.
The result? Investors can now conduct due diligence faster, cheaper, and with greater confidence.
How We Built It: A Simple Yet Powerful Tech Stack
Building a robust and user-friendly RAG system came with its own set of challenges. Here’s a peek under the hood:
1. The RAG System
Built with Langflow, a low-code platform for building AI pipelines on top of LangChain. Langflow allowed us to visually design and test our RAG pipeline, making it easier to iterate and deploy.
Deployed on Hugging Face to make the model accessible to other components. Hugging Face’s infrastructure ensures seamless integration and scalability.
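To make the wiring concrete, here is a minimal sketch of how another component can call a deployed Langflow flow over its REST API. The URL, flow ID, API key, and response path are placeholders, and the exact response shape depends on the flow and Langflow version, so treat this as an illustration rather than our exact production code.

```python
import os
import requests

# Placeholder endpoint for the hosted Langflow deployment (e.g., a Hugging Face Space);
# these are illustrative values, not our real endpoints or flow IDs.
LANGFLOW_URL = os.environ.get("LANGFLOW_URL", "https://your-space.hf.space")
FLOW_ID = os.environ.get("LANGFLOW_FLOW_ID", "your-flow-id")

def ask_rag(question: str) -> str:
    """Send a question to the deployed RAG flow and return the generated answer."""
    resp = requests.post(
        f"{LANGFLOW_URL}/api/v1/run/{FLOW_ID}",
        json={"input_value": question, "input_type": "chat", "output_type": "chat"},
        headers={"x-api-key": os.environ.get("LANGFLOW_API_KEY", "")},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # The response shape varies by flow and Langflow version; this path assumes
    # a single chat output component.
    return data["outputs"][0]["outputs"][0]["results"]["message"]["text"]

if __name__ == "__main__":
    print(ask_rag("What are the key risks in this financial statement?"))
```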
2. Data Storage
Financial data is stored in Astra DB, a vector database optimized for AI applications. Astra DB’s distributed architecture allows us to handle large-scale data retrieval with low latency, ensuring fast and accurate responses.
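The snippet below is a simplified sketch of storing and querying vectorized chunks in Astra DB with the astrapy client. The collection name, endpoint, and embeddings are placeholders, and the exact client methods differ slightly between astrapy versions.

```python
import os
from astrapy import DataAPIClient

# Placeholder credentials and endpoint taken from the Astra DB dashboard.
client = DataAPIClient(os.environ["ASTRA_DB_APPLICATION_TOKEN"])
db = client.get_database(os.environ["ASTRA_DB_API_ENDPOINT"])

# A vector-enabled collection sized for 1536-dimensional embeddings
# (the output size of text-embedding-ada-002).
chunks = db.create_collection("financial_chunks", dimension=1536, metric="cosine")

# Store one chunk together with its embedding.
chunks.insert_one({
    "text": "Revenue grew 12% year over year, driven by subscription sales.",
    "source": "annual_report.pdf",
    "$vector": [0.0] * 1536,  # placeholder; a real embedding goes here
})

# Retrieve the chunks whose vectors are closest to a query embedding.
query_vector = [0.0] * 1536  # placeholder query embedding
for doc in chunks.find(sort={"$vector": query_vector}, limit=5):
    print(doc["source"], doc["text"])
```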
3. Frontend
The user interface is built with Streamlit, a Python library for creating simple and interactive web apps. Streamlit’s declarative syntax made it easy to build a clean, intuitive UI for non-technical users.
Deployed on Streamlit Cloud for seamless accessibility and scalability.
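As an illustration of how compact this layer is, here is a stripped-down sketch of the upload-and-ask flow in Streamlit. The backend URL and response fields are placeholders standing in for our real API.

```python
import requests
import streamlit as st

BACKEND_URL = "https://example-backend.onrender.com"  # placeholder, not the real API

st.title("LLG: Ask Your Financial Documents")

uploaded = st.file_uploader("Upload a financial document", type=["pdf", "csv", "xlsx"])
question = st.text_input("Ask a question", placeholder="What are the key risks?")

if uploaded is not None and st.button("Get answer"):
    with st.spinner("Analyzing..."):
        resp = requests.post(
            f"{BACKEND_URL}/ask",
            files={"file": (uploaded.name, uploaded.getvalue())},
            data={"question": question},
            timeout=60,
        )
        resp.raise_for_status()
        st.subheader("Answer")
        st.write(resp.json().get("answer", "No answer returned."))
```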
4. Backend
APIs were built using FastAPI (Python) and deployed on Render to handle communication between components. FastAPI’s asynchronous capabilities ensure high performance, even under heavy load.
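A bare-bones sketch of such an endpoint is shown below; the route, field names, and the run_rag stub are illustrative only, with the stub standing in for the calls to the vector store and the deployed RAG flow. (FastAPI also needs the python-multipart package for form and file uploads.)

```python
from fastapi import FastAPI, File, Form, UploadFile

app = FastAPI()

async def run_rag(question: str, document: bytes) -> str:
    """Placeholder for the real pipeline: chunk the document, store embeddings
    in the vector database, and query the deployed RAG flow."""
    return f"Stub answer to: {question} ({len(document)} bytes received)"

@app.post("/ask")
async def ask(question: str = Form(...), file: UploadFile = File(...)) -> dict:
    """Accept a document and a question, then delegate to the RAG pipeline."""
    contents = await file.read()
    answer = await run_rag(question, contents)
    return {"answer": answer}
```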
5. Hosting
The Streamlit app is embedded in a website built and hosted on Hostinger, a web hosting platform with an AI-powered site builder. Hostinger’s easy-to-use interface and affordable pricing made it a great choice for launching quickly.
Challenges and Lessons Learned
One of the biggest challenges was integration. Connecting components like Langflow, Streamlit, Astra DB, and FastAPI required careful planning and problem-solving. For example, ensuring that the vector embeddings generated by Langflow were compatible with Astra DB’s storage format took some trial and error.
We also learned that choosing the right tech stack is about more than just individual tools—it’s about how well they work together. For founders building AI products, our advice is simple: focus on integration first. A well-connected system, even if built with beginner-friendly tools, can deliver incredible results.
Accuracy: The Heart of LLG
At LLG, we know that accuracy is non-negotiable. Investors rely on precise, reliable information to make critical decisions. That’s why we’ve invested heavily in ensuring our RAG system delivers the highest level of accuracy.
Here’s how we achieved it:
Semantic Chunking:
Instead of processing entire documents at once, we break them into smaller, semantically meaningful chunks. This allows our AI to focus on the most relevant sections, improving both speed and accuracy. For example, a 100-page financial statement is split into sections like “Revenue Growth,” “Risk Factors,” and “Operating Expenses.”
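There are many ways to implement semantic chunking; the simplified sketch below illustrates the general idea with a toy embed() function: split the text into sentences, embed each one, and start a new chunk whenever similarity to the previous sentence drops below a threshold. In production, a real sentence-embedding model replaces the toy embedding.

```python
import re
import numpy as np

def embed(sentence: str) -> np.ndarray:
    """Toy embedding: hash words into a fixed-size bag-of-words vector.
    A real system would call a sentence-embedding model instead."""
    vec = np.zeros(256)
    for word in re.findall(r"\w+", sentence.lower()):
        vec[hash(word) % 256] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def semantic_chunks(text: str, threshold: float = 0.3) -> list[str]:
    """Group consecutive sentences into chunks; start a new chunk whenever the
    similarity to the previous sentence falls below the threshold."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if not sentences:
        return []
    chunks, current = [], [sentences[0]]
    prev_vec = embed(sentences[0])
    for sent in sentences[1:]:
        vec = embed(sent)
        if cosine(prev_vec, vec) < threshold:
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
        prev_vec = vec
    chunks.append(" ".join(current))
    return chunks
```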
Fine-Tuned Embeddings:
We use custom-trained embeddings to better capture the nuances of financial language. By adapting embedding models such as OpenAI’s text-embedding-ada-002 to financial data, we ensure that our AI can accurately interpret complex terms, jargon, and context.
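One practical note: OpenAI’s hosted embedding models can’t be fine-tuned directly, so adaptation usually means either fine-tuning an open-source embedding model or learning a lightweight transformation on top of the API embeddings. The sketch below illustrates the second option; the adapter matrix and its file name are hypothetical, not our production artifacts.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical adapter: a matrix learned offline on pairs of financial sentences,
# nudging base embeddings toward domain-specific notions of similarity.
ADAPTER = np.load("financial_adapter.npy")  # shape (1536, 1536); placeholder file

def embed_financial(text: str) -> np.ndarray:
    """Embed text with ada-002, then project it through the learned adapter."""
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    base = np.array(response.data[0].embedding)  # 1536-dim base embedding
    adapted = ADAPTER @ base
    return adapted / np.linalg.norm(adapted)
```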
Semantic Search:
Our retrieval system uses advanced semantic search techniques to find the most relevant information, even when users ask questions in natural language. For example, if a user asks, “What are the company’s liabilities?” our system retrieves sections discussing “debt,” “accounts payable,” and “long-term obligations.”
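Under the hood, semantic search boils down to embedding the query and ranking stored chunks by vector similarity; in production, the vector database performs this ranking. A toy version over precomputed embeddings:

```python
import numpy as np

def rank_chunks(query_vec: np.ndarray, chunk_vecs: np.ndarray,
                chunks: list[str], k: int = 3) -> list[tuple[str, float]]:
    """Return the k chunks whose embeddings are most similar to the query embedding."""
    # Normalize so that the dot product equals cosine similarity.
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    c = chunk_vecs / (np.linalg.norm(chunk_vecs, axis=1, keepdims=True) + 1e-12)
    scores = c @ q
    top = np.argsort(scores)[::-1][:k]
    return [(chunks[i], float(scores[i])) for i in top]
```

Because the embedding model places “liabilities” close to terms like “debt” and “accounts payable,” those chunks score highly even when the exact word from the question never appears in them.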
These techniques work together to deliver highly accurate answers that investors can trust. While we’re proud of our current results, we’re constantly refining our models to push the boundaries of what’s possible.
(Curious about how we adapted our embeddings or implemented semantic chunking? Stay tuned for a deeper dive in a future article!)
Results and Impact
Since launching LLG, we’ve seen how our product can transform the due diligence process:
Speed: Users can extract critical information in as little as 3 seconds, compared to the hours or days a manual review takes.
Cost Savings: Investors save thousands of dollars by reducing their reliance on attorneys.
Key Takeaways: Our AI-generated summaries provide instant insights, helping users focus on what matters most.
Accuracy: Thanks to semantic chunking, fine-tuned embeddings, and semantic search, our system delivers highly accurate answers that users can trust.
Here’s an example: After uploading a financial statement, our AI extracts key metrics like revenue growth, operating expenses, and risk factors—all in a matter of seconds.
The Future of LLG
While we’re proud of what we’ve built, this is just the beginning. Our roadmap focuses on three key areas:
Improving Accuracy and Performance: We’re continuously refining our AI models to deliver even more precise and reliable insights.
Enhancing the UI: We’re working on a sleeker, more intuitive design to make the user experience even better.
Expanding Functionality: In the future, we aim to automate the entire due diligence process and even proactively notify users about industry events or trends.
Our ultimate goal is to make LLG the go-to tool for financial investors—a smart, reliable partner that accelerates decision-making and unlocks new opportunities.
Conclusion
At LLG, we believe that AI should be simple, accessible, and transformative for all finance professionals. By combining a state-of-the-art RAG system with a user-friendly interface, we’re helping investors save time, reduce costs, and make smarter decisions.
If you’re a financial professional looking to streamline your due diligence process, we’d love to show you how LLG can help. Contact us (support@llg-ai.com) to learn more.
And to fellow founders: if you’re building an AI product, remember that simplicity and integration are key. Start small, iterate quickly, and focus on delivering value to your users.