
LLaMA-Mesh Text Generator

A web-based text generation application powered by the LLaMA-Mesh model, built with Flask and Transformers.

📖 Overview

This project provides a clean, user-friendly web interface for generating text using the LLaMA-Mesh language model. Users can input prompts and control the length of generated text through an intuitive web form.

✨ Features

  • Web Interface: Clean, responsive design using Tailwind CSS
  • Text Generation: Powered by the LLaMA-Mesh model via Transformers
  • Customizable Output: Adjustable text length (10-1000 characters)
  • Real-time Processing: Asynchronous text generation with loading states
  • Error Handling: Graceful error display and recovery

🛠️ Technology Stack

  • Backend: Python Flask
  • ML Framework: PyTorch + Transformers
  • Model: LLaMA-Mesh (Zhengyi/LLaMA-Mesh)
  • Frontend: HTML, JavaScript, Tailwind CSS
  • Package Management: pip + virtual environment

📋 Requirements

System Requirements

  • RAM: 32GB+ recommended (model is ~15GB)
  • Storage: 20GB+ free space
  • GPU: CUDA-compatible GPU recommended (optional)
  • Python: 3.8 or higher

Dependencies

  • Flask >= 3.1.0
  • torch >= 2.2.0
  • transformers >= 4.30.0
  • Additional dependencies in requirements.txt
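Based on the version bounds above, the core of requirements.txt presumably looks like the following (exact pins may differ; check the file itself):

flask>=3.1.0
torch>=2.2.0
transformers>=4.30.0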

🚀 Installation

1. Clone the Repository

git clone <your-repository-url>
cd capstone

2. Create Virtual Environment

python -m venv .venv

3. Activate Virtual Environment

Windows:

.venv\Scripts\activate

macOS/Linux:

source .venv/bin/activate

4. Install Dependencies

pip install -r requirements.txt

🎯 Usage

Starting the Application

python app.py

The application will start on http://localhost:5000

Using the Web Interface

  1. Open your browser and navigate to http://localhost:5000
  2. Enter your text prompt in the input field
  3. Adjust the maximum length slider (10-1000 characters)
  4. Click "Generate" to create text
  5. View the generated output below

API Endpoint

You can also use the API directly:

curl -X POST http://localhost:5000/generate \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "input_text=Your prompt here&max_length=100"

๐Ÿ“ Project Structure

capstone/
├── app.py              # Main Flask application
├── index.html          # Web interface template
├── requirements.txt    # Python dependencies
├── .gitignore          # Git ignore rules
├── .venv/              # Virtual environment (ignored)
└── README.md           # This file

⚙️ Configuration

Model Parameters

You can modify these parameters in app.py:

  • temperature: Controls randomness (0.1-2.0)
  • top_p: Nucleus sampling parameter (0.1-1.0)
  • max_length: Maximum output length
  • num_return_sequences: Number of sequences to generate
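As an illustration, these knobs map onto a Transformers generate() call roughly as follows. The values are examples, not the defaults shipped in app.py, and note that temperature and top_p only take effect when sampling is enabled:

output = model.generate(
    **inputs,
    max_length=200,           # maximum output length in tokens
    temperature=0.7,          # randomness: lower is more deterministic
    top_p=0.9,                # nucleus sampling cutoff
    num_return_sequences=1,   # how many completions to return
    do_sample=True,           # required for temperature/top_p to apply
)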

Flask Configuration

  • Debug Mode: Enabled by default (debug=True)
  • Host: localhost (127.0.0.1)
  • Port: 5000
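These settings correspond to the standard Flask entry point at the bottom of app.py; to change the host or port, adjust the app.run() call. A sketch matching the defaults above:

if __name__ == "__main__":
    # debug=True enables auto-reload and verbose errors; disable it in production
    app.run(host="127.0.0.1", port=5000, debug=True)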

🔧 Troubleshooting

Common Issues

1. Memory Error / Segmentation Fault

Problem: The model is too large for the available RAM.

Solutions:

  • Use a machine with more RAM (32GB+)
  • Use a smaller model variant
  • Enable model offloading to disk (see the sketch below)
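For the offloading option, Transformers can spill weights that do not fit in memory to disk via the accelerate package. A hedged sketch (the offload folder name is arbitrary, and accelerate must be installed):

from transformers import AutoModelForCausalLM

# device_map="auto" places layers on GPU, CPU, and disk as capacity allows;
# overflow weights are written to offload_folder.
model = AutoModelForCausalLM.from_pretrained(
    "Zhengyi/LLaMA-Mesh",
    device_map="auto",
    offload_folder="offload",
)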

2. Import Error (Werkzeug/Flask)

Problem: Incompatible Flask and Werkzeug versions.

Solution:

pip install --upgrade flask werkzeug

3. CUDA Out of Memory

Problem: Insufficient GPU memory.

Solutions:

  • Use CPU-only inference: torch.device('cpu') (see the sketch below)
  • Reduce batch size
  • Use gradient checkpointing
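A minimal CPU-only loading sketch (slow, but it sidesteps GPU memory entirely):

import torch
from transformers import AutoModelForCausalLM

device = torch.device("cpu")
model = AutoModelForCausalLM.from_pretrained("Zhengyi/LLaMA-Mesh").to(device)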

4. Slow Model Loading

Problem: The model takes a long time to download and load.

Solutions:

  • Models are cached after the first download, so subsequent starts are faster
  • Use a faster internet connection for the initial download
  • Consider model quantization (see the sketch below)
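If reduced precision is acceptable, a hedged 4-bit loading sketch using the bitsandbytes integration (requires the bitsandbytes package and a CUDA GPU; not part of the current app.py):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True)  # roughly 4x smaller weights
model = AutoModelForCausalLM.from_pretrained(
    "Zhengyi/LLaMA-Mesh",
    quantization_config=quant_config,
    device_map="auto",
)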

Performance Optimization

  1. GPU Acceleration: Ensure CUDA is properly installed
  2. Model Caching: Models are cached in ~/.cache/huggingface/
  3. Memory Management: Use torch.no_grad() for inference (see the sketch below)
  4. Batch Processing: Process multiple requests efficiently
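For point 3, wrapping generation in torch.no_grad() tells PyTorch to skip gradient bookkeeping, which cuts inference memory at no cost to output quality. A minimal sketch:

import torch

with torch.no_grad():
    output = model.generate(**inputs, max_length=100)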

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

๐Ÿ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

📞 Support

If you encounter any issues or have questions:

  1. Check the Issues page
  2. Review the troubleshooting section above
  3. Create a new issue with detailed information

🔮 Future Enhancements

  • Model selection dropdown
  • Batch text generation
  • Export/save functionality
  • User authentication
  • API rate limiting
  • Docker containerization
  • Cloud deployment guides

Note: This application requires significant computational resources due to the large language model. Ensure your system meets the minimum requirements before installation.
