D-Lab AI Pulse: Spring 2026

License: CC BY 4.0

AI Pulse is UC Berkeley D-Lab's online workshop series on AI tools for research and academia, held every two weeks. Each 50-minute session features a live demo (~30 minutes) followed by open discussion (~20 minutes).

No prior experience with AI tools required! Check out D-Lab's Workshop Catalog to browse all workshops.


Next Workshop (April 14, 2026): Running Your Own AI

This session covers the full lifecycle of running your own AI: from downloading a model to your laptop, to fine-tuning it on a server, to deploying it in the cloud through HuggingFace. We cover:

  • What a large language model actually is and how it works under the hood.
  • Installing and running a model locally with Ollama. No internet, no subscription, no API key required.
  • Context windows and quantization: what these mean and why they matter for working with real documents.
  • Fine-tuning with LoRA: adapting a model to your domain using your own data.
  • Deploying your model on HuggingFace: pay-per-use cloud inference and sharing with collaborators.
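To give a flavor of the quantization point above, here is a back-of-the-envelope sketch of why bit width matters when running a model on your own laptop. The parameter count (7B) and bit widths are illustrative, not tied to any particular model from the session; the estimate covers weights only and ignores the KV cache and runtime overhead.

```python
# Rough memory footprint of an LLM's weights at different quantization levels.
# Weight memory ≈ (number of parameters) × (bytes per weight).

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight memory in gigabytes (weights only)."""
    return n_params * bits_per_weight / 8 / 1e9

n = 7e9  # a typical "7B" model
for bits, label in [(16, "fp16"), (8, "int8"), (4, "int4")]:
    print(f"{label}: ~{weight_memory_gb(n, bits):.1f} GB")
# fp16: ~14.0 GB, int8: ~7.0 GB, int4: ~3.5 GB
```

This is why a 7B model that would not fit in 8 GB of RAM at full precision becomes comfortably runnable once quantized to 4 bits, at some cost in output quality.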

As a running application, we show how these tools come together to build a personalized writing partner that knows your field's notation, style, and structure.

Our goal is to demystify what's inside these models and show you that customizing AI for your own research is something you can do with free tools and a Berkeley computing account.


Previous Workshops

Session 6 (March 31, 2026): AI in Science Case Studies

Materials

This session walked through six real case studies of researchers who successfully used AI in their scientific work, across astronomy, biology, social science, theoretical physics, chemistry, and mathematics. For each case we covered the complete workflow, how they customized the AI interaction, and where things went wrong.

Session 5 (March 17, 2026): AI for Teaching, Learning and Collaborating

Tools: NotebookLM | Materials

How can AI help you learn, teach, and work with others? This session explored NotebookLM, Google's source-grounded AI that generates podcasts, flashcards, study guides, and more from your own documents. Answers come from your sources, not the internet. We walked through 9 live demos covering exam prep, literature synthesis, course material creation, policy navigation, and collaborative research workflows.

We also discussed the rise of AI homework agents, AI humanizers, and what they mean for how we think about assessments.

Session 4 (March 10, 2026): LLMs for Qualitative Work

Tools: Gemini, NotebookLM | Materials

AI tools have transformed quantitative research workflows — but qualitative researchers have been largely left out of the conversation. This session explored how LLMs fit into qualitative work through five live demos: grounded document analysis with NotebookLM, dialogical qualitative coding, multimodal analysis of photos and video, structured text extraction from open-ended responses, and piloting research designs with simulated participants.

We also discussed the unique risks LLMs pose for interpretive work and where human judgment remains essential in the analysis loop.

Session 3 (February 24, 2026): AI for Teaching, Learning and Collaborating

Tools: NotebookLM, Khanmigo, Microsoft Study, SciSpace | Materials

How can AI help you learn, teach, and work with others? This session explored NotebookLM, Google's source-grounded AI that generates podcasts, flashcards, study guides, and more from your own documents. Answers come from your sources, not the internet. We walked through 9 live demos covering exam prep, literature synthesis, course material creation, policy navigation, and collaborative research workflows.

We also discussed the rise of AI homework agents, AI humanizers, and what they mean for how we think about assessments.

Session 2 (February 10, 2026): Scientific AI

Tools: Perplexity, Consensus, Elicit, Kosmos | Materials

General-purpose AI has a citation problem: studies show ChatGPT fabricates roughly 1 in 5 academic references. This session walked through specialized research tools designed to solve this: Perplexity for quick context with verified sources, Consensus for evidence synthesis across peer-reviewed literature, and Elicit for systematic reviews and data extraction.

We also took a first look at Kosmos, an autonomous research agent that reads ~1,500 papers and writes ~42,000 lines of code over 12 hours to produce a research report. We discussed when to trust (and not trust) any of these tools.

Session 1 (January 27, 2026): Coding AI

Tools: Claude Code, Gemini CLI | Materials

Our inaugural session introduced AI-powered coding assistants that work directly in the terminal. We demoed Claude Code and Gemini CLI on real research tasks: generating and documenting code, navigating unfamiliar codebases, consolidating messy datasets, and running linear regressions, all through natural language conversation.

The session showed how these tools can save researchers hours on routine programming tasks, even if you're not a software developer.


Future Sessions: AI for Data Analysis, Productivity & Workflow, Customizing Your AI (Tentative)


Resources


About the UC Berkeley D-Lab

D-Lab works with Berkeley faculty, research staff, and students to advance data-intensive social science and humanities research. Our goal at D-Lab is to provide practical training, staff support, resources, and space to enable you to use AI tools for your own research applications.

Visit the D-Lab homepage to learn more about us.

Contributors

  • Bruno Cittolin Smaniotto
  • Tom van Nuenen
