TINY LLAMA 🦙 blog generator

Try the deployed app here: https://huggingface.co/spaces/Richard9905/Richards_Blog_generator

Project Overview

Imagine generating entire blog articles without relying on external servers or APIs. That's exactly what this project delivers. It runs the Llama 2 7B GGML model locally and shapes its output with a carefully crafted prompt template, producing full-fledged blog posts right from your machine. An interactive Streamlit interface ties everything together, making the experience intuitive and accessible.

Features

  • 📄 Custom-designed Prompt Engineering to guide blog generation effectively (see the template sketch after this list).
  • 🖥️ Offline Inference with the Llama 2 7B GGML model, ensuring speed and privacy.
  • 🚀 A responsive Streamlit Web Application for a smooth user experience.
  • ⚡ No internet needed during generation — fast, reliable, and completely local.
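
For illustration, here is a minimal sketch of the kind of prompt template the first feature refers to. The wording and the `topic` / `word_count` placeholders are assumptions for this example, not the exact template shipped in this repository.

    # Hypothetical prompt template; wording and variable names are
    # illustrative, not the repository's actual template.
    TEMPLATE = (
        "You are a professional blog writer.\n"
        "Write a blog post about: {topic}\n"
        "Aim for roughly {word_count} words, with an introduction, "
        "a few body sections, and a conclusion.\n"
    )

    prompt = TEMPLATE.format(topic="running LLMs offline", word_count=300)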

Tech Stack

  • Llama 2 7B GGML (offline model)
  • Python 3.9+
  • Streamlit for the UI
  • Prompt Engineering principles

How It Works

First, the user enters a simple topic or keyword in the Streamlit app. Then, behind the scenes, a crafted prompt template reshapes that input into a form Llama 2 can expand on. The model generates a coherent, detailed blog post in response, sometimes poetic, sometimes sharp, depending on the prompt. Finally, the text is displayed immediately on the Streamlit page for review, editing, or use.
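
As a minimal sketch of that flow, assuming the `ctransformers` bindings for GGML models (the model path, file name, and generation parameters below are assumptions, not this repository's exact code):

    # Sketch of the topic -> prompt -> model -> display pipeline.
    # Assumes `streamlit` and `ctransformers` are installed and that the
    # GGML weights file exists at the path below (an assumed location).
    import streamlit as st
    from ctransformers import AutoModelForCausalLM

    st.title("TinyLLaMA Blog Generator")
    topic = st.text_input("Enter a blog topic")

    if st.button("Generate") and topic:
        # Load the local GGML model; no network access is required.
        llm = AutoModelForCausalLM.from_pretrained(
            "models/llama-2-7b-chat.ggmlv3.q8_0.bin",
            model_type="llama",
        )
        prompt = f"Write a detailed, well-structured blog post about: {topic}\n"
        st.write(llm(prompt, max_new_tokens=512, temperature=0.7))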

Setup Instructions

Before starting, ensure the Llama 2 7B GGML model is available locally on your system.
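
If you still need the weights, one way to fetch a GGML build is via `huggingface_hub`; the `repo_id` and `filename` below are assumptions, so substitute whichever model and quantization you actually use.

    # Illustrative download of GGML weights from the Hugging Face Hub.
    # The repo_id and filename are assumptions, not pinned by this project.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="TheBloke/Llama-2-7B-Chat-GGML",
        filename="llama-2-7b-chat.ggmlv3.q8_0.bin",
        local_dir="models",
    )
    print("Model saved to:", path)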

  1. Clone the repository:

    git clone https://github.com/boomshineking/llama2-blog-generator.git
    cd llama2-blog-generator
  2. (Optional but recommended) Set up a virtual environment:

    python -m venv venv
    source venv/bin/activate   # Windows users: venv\Scripts\activate
  3. Install all necessary packages (a sketch of a plausible requirements.txt appears after these steps):

    pip install -r requirements.txt
  4. Launch the Streamlit app:

    streamlit run app.py
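
For reference, a plausible requirements.txt for this stack might look like the following. These entries are assumptions based on the tools named above, not the file actually committed to the repository.

    streamlit
    ctransformers
    huggingface_hub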

Within seconds, your local blog generator will be live!

Screenshots

(Screenshots of the Streamlit app, captured 2025-04-24 and 2025-04-28; images not reproduced here.)

Future Roadmap

  • Introduce selectable blog writing styles (e.g., technical, casual, storytelling).
  • Improve prompt engineering templates to further boost creativity and coherence.

License

This repository is provided for educational and personal experimentation purposes. Commercial usage should adhere to the licensing terms associated with Llama 2.

🚀 Let's Connect

Curious about how it all works? Want to collaborate? Feel free to reach out; I'm always up for building something awesome! Email: [email protected]

About

This project uses the TinyLLaMA foundation model to generate blog content, guided and structured through prompt templates. To keep costs down, the model is stored and run locally rather than accessed through paid APIs, which significantly reduces ongoing expenses since local storage is far cheaper than per-request API pricing.
