146 changes: 121 additions & 25 deletions README.md
@@ -1,25 +1,121 @@
# About The Hackathon
The MoroccoAI InnovAI Hackathon is a unique opportunity for AI enthusiasts, professionals, and innovators to collaborate and create transformative AI-based solutions addressing real-life challenges in Morocco and across Africa. As part of the annual MoroccoAI Conference, this hackathon is set under the theme “Driving the Future of Innovation Through AI”, inspiring participants to harness AI’s capabilities to make a meaningful societal impact. Participants will join teams to develop Proofs of Concept (PoCs) using applications or APIs that address challenges in various domains: Education, Healthcare, Environment, Finance, or Customer Services.

In line with MoroccoAI’s mission, this hackathon centers around “Driving the Future of Innovation Through AI”. AI has the power to redefine industries, address community needs, and propel sustainable growth. Through this event, participants will dive into AI’s potential by developing impactful solutions that address challenges unique to Morocco and Africa in fields such as agriculture, education, health, and finance, fostering innovation in response to real-world needs.

# The Challenge
Connect with the MoroccoAI community, join a team, and brainstorm ideas, then come up with a project that leverages AI in one of five focus areas:
* Innovation
* Healthcare
* Environment
* Finance
* Customer Services

# Mentorship
Join the Hackathon server on Discord and meet the mentors to learn more about their proposed projects.

# Why should you participate in this Hackathon?
* Gain hands-on experience in AI project development targeting relevant issues in Morocco and Africa.
* Receive mentorship and networking opportunities with experts and peers in the AI community.
* Showcase your solution to a jury of AI specialists at the awards ceremony, creating visibility and opportunities for further development.
* Win great prizes offered by MoroccoAI's sponsors.
* Obtain your MoroccoAI certificate of recognition.

# For more information
https://morocco.ai/events/conferences/MoroccoAI-Conference-2024/pages/hackathon.html
# MedConnect: Revolutionizing Healthcare in Africa

## Introduction
**MedConnect** is an innovative AI and blockchain-powered healthcare platform designed to tackle some of the most pressing issues facing healthcare in Africa. Our platform leverages **AI** to assist healthcare professionals in making informed decisions and **blockchain** to ensure secure, transparent, and decentralized data sharing.

By enhancing access to medical expertise and improving healthcare collaboration, **MedConnect** aims to bridge the gap between underserved populations and quality healthcare, thus contributing to global health equity.

---

## Problem Statement

Africa faces numerous healthcare challenges, including:

1. **Limited Access to Healthcare**: An estimated 60% of the African population lacks access to reliable healthcare services.
2. **Fragmented Medical Records**: Medical data is scattered across various institutions, often leading to inefficiencies and errors in patient care.
3. **Insecure Communication**: Sensitive patient data is often transmitted insecurely, risking breaches of privacy.
4. **Shortage of Specialists**: Many rural areas lack access to specialized healthcare professionals, leaving local doctors struggling to provide accurate care.

These issues contribute to a significant disparity in healthcare delivery across the continent.

---

## MedConnect’s Solution

**MedConnect** integrates **AI** and **blockchain** to address these challenges and provide a sustainable solution to healthcare problems in Africa.

### 1. **Generative AI Assistance**
Our platform leverages cutting-edge **Generative AI** to help healthcare professionals quickly generate evidence-based medical reports, diagnostic recommendations, and even assist in image classification (e.g., MRI scans).

### 2. **Blockchain Technology**
Using **blockchain**, we create a decentralized system that ensures medical data is shared securely and transparently. Our **smart contracts** enforce strict access controls, allowing only authorized professionals to access sensitive information.
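The access-control rule described above can be sketched as a plain-Python model. This is illustrative only: the actual smart-contract code is not included in this section, and `MedicalRecord`, `grant_access`, and `read_record` are hypothetical names standing in for the on-chain logic.

```python
# Illustrative sketch of the access rule a MedConnect smart contract would
# enforce: only professionals explicitly authorized for a record may read it.
# All names here are hypothetical, not part of the repository.

class MedicalRecord:
    def __init__(self, patient_id, data):
        self.patient_id = patient_id
        self._data = data
        self._authorized = set()  # professionals granted access to this record

    def grant_access(self, professional_id):
        self._authorized.add(professional_id)

    def revoke_access(self, professional_id):
        self._authorized.discard(professional_id)

    def read_record(self, professional_id):
        # The contract rejects reads from unauthorized parties
        if professional_id not in self._authorized:
            raise PermissionError("access denied")
        return self._data

record = MedicalRecord("patient-42", {"diagnosis": "pending MRI review"})
record.grant_access("dr-amina")
print(record.read_record("dr-amina"))  # authorized read succeeds
```

On an actual chain, the authorization set and the access checks would live in the contract itself, so no single institution can bypass or alter them.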

### 3. **HealthSphere MediBot**
The **HealthSphere MediBot** is an AI-powered assistant that helps healthcare professionals in decision-making, offering insights, recommendations, and medical reports in real-time.

<img src="THE-M2M-SQUAD/Frontend/assets/AM1.jpeg" alt="HealthSphere MediBot" width="300">

---

## Key Features

- **AI-Powered Diagnostics**: From image classification to diagnostic support, AI tools assist doctors in making faster, more accurate decisions.
- **Efficient Communication**: Facilitates secure, transparent sharing of medical data across institutions and specialists.
- **Access to Specialists**: Improves access to medical experts, even in remote areas.
- **Global Health Equity**: MedConnect bridges the gap between underserved populations and high-quality healthcare.

---

## Impact

By leveraging MedConnect’s platform, we aim to:

- **Enhance Access to Expertise**: Rural doctors and patients can receive real-time expert advice from specialists located anywhere on the continent.
- **Improve Collaboration**: Doctors can collaborate more effectively by securely sharing case data and treatment plans.
- **Faster, More Accurate Diagnoses**: AI-powered tools help speed up the diagnosis process, ultimately improving patient outcomes.
- **Bridge Healthcare Gaps**: We aim to level the playing field by providing underserved populations with access to quality healthcare via technology.

---

## Business Model

MedConnect will generate revenue through the following channels:

- **Subscription Plans**: B2B subscription for AI diagnosis, analytics, and advanced tools.
- **Transaction Fees**: Micro-fees for expert consultations and second opinions.
- **Institutional Partnerships**: Paid partnerships with hospitals, universities, and NGOs for custom healthcare solutions.

<img src="THE-M2M-SQUAD/Frontend/assets/AM.jpeg" alt="MedConnect business model" width="300">

---

## Demo and Resources

- **MVP Demo URL**: [MedConnect AI App](https://medconnectbot.netlify.app)
- **Demo Video URL**: [Demo](https://drive.google.com/file/d/1cN5GZNGTS9g0kFdKPf43vpXClizTNvwn/view?usp=sharing)
- **GitHub Repository URL**: [URL](https://github.com/Hamzar2/THE-H2H-SQUAD.git)
- **Pitch Deck (PDF)**: [MEDCONNECT](https://drive.google.com/file/d/1VoElZCfaBF_SYlG56uU23B-KHQhhdF1z/view?usp=sharing)
- **Pitch Video URL**: [Pitch](https://www.loom.com/share/de580723e31e40bd88b6ef41eb2cfa8f?sid=4ac5beec-d512-4789-b306-4ac64c6a58dd)

For further details and to get started with the project, please follow the setup instructions below.

---

## How to Run the Server

Follow these steps to set up and run the **Flask app**:

1. **Install Required Dependencies**:

   ```bash
   pip install -r requirements.txt
   ```

2. **Set Up Environment Variables**:
   Create a `.env` file in the root of the project and add the necessary configuration settings:

   ```
   API_KEY=<your_secret_key>
   ```

3. **Run the Flask App**:

   ```bash
   python app.py
   ```

4. **Access the Application**: Open your browser and navigate to http://127.0.0.1:5000/ to see the app in action.


## How to Run the Vite Project

1. **Install Node.js** (if not already installed).

2. **Set Up Environment Variables**:
   Create a `.env` file in the root of the project and add the necessary configuration settings:

   ```
   VITE_API_URL_LOCALHOST=http://127.0.0.1:5000/
   ```

3. **Install Dependencies**:

   ```bash
   npm install
   ```

4. **Run the Vite Development Server**:

   ```bash
   npm run dev
   ```

## Conclusion

**MedConnect** represents a groundbreaking step toward solving Africa’s healthcare challenges using **AI** and **blockchain** technologies. By providing a secure, transparent platform for healthcare professionals, **MedConnect** dramatically improves access to medical expertise, collaboration, and patient outcomes.

Join us in transforming healthcare in Africa and making a lasting impact on global health equity.
4 changes: 4 additions & 0 deletions THE-M2M-SQUAD/Backend/.gitignore
@@ -0,0 +1,4 @@
classfication.py
report.html

.env
242 changes: 242 additions & 0 deletions THE-M2M-SQUAD/Backend/app.py
@@ -0,0 +1,242 @@
import os
import logging
from flask import Flask, request, jsonify, Response
from flask_cors import CORS
from huggingface_hub import InferenceClient
from transformers import pipeline
from PIL import Image
from dotenv import load_dotenv
from io import BytesIO
import requests
from ultralyticsplus import YOLO, postprocess_classify_output

# Load environment variables
load_dotenv()
API_KEY = os.getenv("API_KEY")  # Hugging Face API token, set in .env

# Initialize Flask app
app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*"}})
logging.basicConfig(level=logging.INFO)

# Initialize Hugging Face API client
client = InferenceClient(api_key=API_KEY)

# Load ML models
brain_classifier = pipeline("image-classification", model="Devarshi/Brain_Tumor_Classification")
Alzheimer_classifier = pipeline("image-classification", model="evanrsl/resnet-Alzheimer")
chest_xray_classifier = YOLO('keremberke/yolov8m-chest-xray-classification')

# Set model parameters
chest_xray_classifier.overrides['conf'] = 0.25

# Hugging Face BLIP model API setup
BLIP_API_URL = "https://api-inference.huggingface.co/models/Salesforce/blip-image-captioning-large"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Details of the classifier(s) used, set by /classify-image and read by /chat
model_details = None


@app.route('/health', methods=['GET'])
def health_check():
    """
    Endpoint for health checks.
    """
    return jsonify({"status": "healthy"}), 200


def estimate_token_count(text):
    """
    Approximate the token count of the given text (whitespace-split heuristic).
    """
    return len(text.split())


def truncate_chat_history(chat_history, token_limit):
    """
    Truncate the chat history to fit within the token limit.
    """
    truncated_history = []
    current_token_count = 0

    for turn in chat_history:
        if 'role' in turn and 'content' in turn:
            turn_tokens = estimate_token_count(turn['content'])
            if current_token_count + turn_tokens <= token_limit:
                truncated_history.append(turn)
                current_token_count += turn_tokens
            else:
                break  # Stop adding once the token limit is reached
    return truncated_history
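As a quick sanity check, the two helpers above can be exercised standalone (the functions are copied from this file; the sample history is made up):

```python
# Standalone demonstration of the truncation helpers defined in app.py.

def estimate_token_count(text):
    # Rough heuristic: whitespace-separated words, not true model tokens
    return len(text.split())

def truncate_chat_history(chat_history, token_limit):
    truncated_history = []
    current_token_count = 0
    for turn in chat_history:
        if 'role' in turn and 'content' in turn:
            turn_tokens = estimate_token_count(turn['content'])
            if current_token_count + turn_tokens <= token_limit:
                truncated_history.append(turn)
                current_token_count += turn_tokens
            else:
                break
    return truncated_history

history = [
    {"role": "user", "content": "one two three"},        # 3 tokens
    {"role": "assistant", "content": "four five"},       # 2 tokens
    {"role": "user", "content": "six seven eight nine"}, # 4 tokens
]
print(truncate_chat_history(history, 5))  # keeps the first two turns only
```

Note that this scheme keeps the *earliest* turns and drops the most recent ones once the limit is hit; depending on the desired behavior, iterating over `reversed(chat_history)` and re-reversing the result may be preferable so the newest context survives.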

@app.route('/chat', methods=['POST'])
def chat():
    """
    Chat endpoint to handle AI-driven conversations.
    """
    try:
        # Validate the request
        if not request.is_json:
            return jsonify({"error": "Content-Type must be application/json"}), 400

        data = request.get_json()
        if 'messages' not in data or 'chatHistory' not in data:
            return jsonify({"error": "Missing 'messages' or 'chatHistory' in request body"}), 400

        # Extract chat history and current query
        chat_history = data['chatHistory']
        current_query = data['messages']

        # Truncate chat history to fit within 4000 tokens
        MAX_TOKENS = 4000
        truncated_history = truncate_chat_history(chat_history, MAX_TOKENS)

        # Construct the medical report prompt
        medical_report_prompt = {
            "role": "assistant",
            "content": (
                "You are a Medical AI Assistant tasked to help healthcare professionals make informed decisions. "
                "Your role is to provide accurate, evidence-based medical reports, recommendations, and insights. "
                "Follow standard medical reporting formats, ensure clarity and conciseness, and provide actionable suggestions. "
                "Take into account the chat history and the current classification results (if any).\n\n"
                "Here is the chat history for reference:\n\n"
            )
        }

        # Add the truncated chat history to the prompt
        for i, turn in enumerate(truncated_history):
            medical_report_prompt["content"] += f"**Turn {i+1}:**\n**{turn['role'].capitalize()}:** {turn['content']}\n\n"

        # Look for a classification result in the current query
        classification_message = next(
            (msg['content'] for msg in current_query if 'Image Classification:' in msg['content']), None
        )

        if classification_message:
            medical_report_prompt["content"] += f"Image Classification Result: {classification_message}\n\n"
            medical_report_prompt["content"] += "Please generate a medical report based on the result."
            medical_report_prompt["content"] += (
                "When creating the medical report:\n"
                "- Base your findings on the provided classification results, if available.\n"
                "- Include potential diagnoses, detailed explanations, recommendations for follow-up actions, and treatment options.\n"
                "- Ensure your recommendations are grounded in reputable medical knowledge and clearly state any uncertainties.\n\n"
                "### Referencing Guidelines:\n"
                "- Include at least **two to three insightful references** from credible medical sources, such as PubMed, WHO, CDC, or similar platforms.\n"
                "- Present the references as clickable markdown links (e.g., `[description](URL)`).\n"
                "- Ensure references are directly relevant to the discussed findings and support your conclusions with recent and authoritative data.\n"
                "- If referencing studies or guidelines, briefly summarize their relevance to your conclusions.\n\n"
                "### Example of References in the Report:\n"
                "- Potential Diagnosis: Alzheimer's Disease\n"
                "  Reference: [Alzheimer's Diagnosis and Treatment Guidelines - WHO](https://www.who.int/alzheimers)\n\n"
                "- Potential Treatment: Chemotherapy for Brain Tumors\n"
                "  Reference: [PubMed: Advances in Brain Tumor Treatments](https://pubmed.ncbi.nlm.nih.gov/12345678/)\n\n"
                "### Generate the Report:\n"
                "Based on the classification results and chat history, write a detailed medical report that includes:\n"
                "1. An overview of the findings.\n"
                "2. Possible diagnoses with explanations.\n"
                "3. Recommendations for further investigation or treatment.\n"
                "4. Relevant and insightful references to support your conclusions.\n"
            )
            medical_report_prompt["content"] += f"Details about the model: {model_details}"
        else:
            medical_report_prompt["content"] += "Provide guidance for further steps or clarification if needed."

        current_query.append(medical_report_prompt)

        # Generate a streaming response from the model
        def generate_response():
            stream = client.chat.completions.create(
                model="meta-llama/Meta-Llama-3-8B-Instruct",
                messages=current_query,
                max_tokens=4090,
                stream=True
            )
            for chunk in stream:
                if chunk.choices[0].delta.content:
                    yield chunk.choices[0].delta.content

        return Response(generate_response(), content_type='text/plain')

    except Exception as e:
        logging.error(f"Server error: {str(e)}")
        return jsonify({"error": str(e)}), 500
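For reference, a client-side sketch of the payload shape that `chat()` validates. The field values are hypothetical, and the localhost URL assumes the app is running per the README instructions:

```python
# Sketch of a client call to the /chat endpoint above (illustrative only).
import json

payload = {
    "messages": [
        {"role": "user", "content": "Image Classification: glioma (score 0.91)"}
    ],
    "chatHistory": [
        {"role": "user", "content": "Patient reports recurring headaches."}
    ],
}

# Both top-level keys are required; the endpoint returns HTTP 400 otherwise
assert "messages" in payload and "chatHistory" in payload
body = json.dumps(payload)

# To actually stream the plain-text reply (requires the running server and `requests`):
# import requests
# with requests.post("http://127.0.0.1:5000/chat", json=payload, stream=True) as r:
#     for chunk in r.iter_content(chunk_size=None, decode_unicode=True):
#         print(chunk, end="")
```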



@app.route('/classify-image', methods=['POST'])
def classify_image():
    """
    Endpoint to classify uploaded images using pretrained models.
    """
    # model_details is read later by the /chat endpoint, so update the module-level value
    global model_details

    if 'image' not in request.files:
        return jsonify({"error": "No image file provided"}), 400

    image_file = request.files['image']

    try:
        image = Image.open(image_file)

        # Step 1: Generate an image description with the BLIP captioning model
        buffer = BytesIO()
        image.save(buffer, format="JPEG")
        buffer.seek(0)
        response = requests.post(BLIP_API_URL, headers=HEADERS, data=buffer.read())
        response.raise_for_status()
        description_response = response.json()
        description = description_response[0]['generated_text']

        logging.info(f"Image description: {description}")

        # Step 2: Route the image to the matching classifier(s) based on the description
        if "chest" in description.lower():
            model_details = (
                "Chest X-ray:\n"
                "Model: keremberke/yolov8m-chest-xray-classification\n"
                "Dataset: NIH Chest X-ray Dataset"
            )
            results = chest_xray_classifier.predict(image)
            processed_result = postprocess_classify_output(chest_xray_classifier, result=results[0])
            probs_tensor = results[0].probs.data  # Tensor containing the class probabilities
            probs_list = probs_tensor.tolist()
            logging.info(f"Classification result: {str(processed_result)}")
            return jsonify({"label": str(processed_result), "score": probs_list}), 200

        elif "brain" in description.lower():
            model_details = (
                "Alzheimer:\n"
                "Model: evanrsl/resnet-Alzheimer\n\n"
                "Brain_Tumor:\n"
                "Model: Devarshi/Brain_Tumor_Classification\n"
                "Dataset: RSNA-MICCAI Brain Tumor Radiogenomic Classification Challenge"
            )
            image = image.resize((256, 256))
            results_brain = brain_classifier(image)
            label_b = results_brain[0]['label']
            score_b = results_brain[0]['score']

            results_Alzheimer = Alzheimer_classifier(image)
            label_a = results_Alzheimer[0]['label']
            score_a = results_Alzheimer[0]['score']

            return jsonify(
                {
                    "label": f"Alzheimer --> {label_a} ; Brain tumor --> {label_b}",
                    "score": f"{label_a} --> {score_a} ; {label_b} --> {score_b}",
                }
            ), 200
        else:
            return jsonify({"label": "Unrecognized image type", "score": None}), 200

    except requests.exceptions.RequestException as e:
        return jsonify({"error": f"Image description API error: {str(e)}"}), 500
    except Exception as e:
        return jsonify({"error": str(e)}), 500
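The caption-based dispatch used above can be isolated into a small pure function, which makes the routing rule easy to check without loading any model. This is a sketch only; `route_scan` is not part of app.py:

```python
# Sketch: isolate the caption-based dispatch rule used by /classify-image.
def route_scan(description):
    """Return the classifier name(s) that /classify-image would apply
    to an image with the given BLIP caption."""
    text = description.lower()
    if "chest" in text:
        return ["chest_xray_classifier"]
    if "brain" in text:
        return ["brain_classifier", "Alzheimer_classifier"]
    return []  # unrecognized image type

print(route_scan("an x-ray of a human chest"))  # ['chest_xray_classifier']
```

Keyword matching on a generated caption is fragile (a caption mentioning neither word falls through to "Unrecognized image type"), so a dedicated modality classifier might be a more robust router.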


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)