Merge pull request #24 from willkg/missing-conferences
Add missing conferences
Showing 834 changed files with 23,866 additions and 0 deletions.
@@ -0,0 +1,7 @@
{
  "title": "PyData Berlin 2014",
  "description": "PyData conferences are a gathering of users and developers of data analysis tools in Python. The goals are to provide Python enthusiasts a place to share ideas and learn from each other about how best to apply the language and tools to ever-evolving challenges in the vast realm of data management, processing, analytics, and visualization. ",
  "url": "http://pydata.org/berlin2014/",
  "slug": "pydata-berlin-2014",
  "start_date": "2014-07-25"
}
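The conference record above is plain JSON with five fields (title, description, url, slug, start_date). A minimal sketch of reading it with the standard library follows; the file path is an assumption, since the truncated diff above does not show this file's name.

import json

# Hypothetical path -- the diff does not name the conference file.
with open("data/pydata-berlin-2014/category.json") as fp:
    conference = json.load(fp)

print(conference["title"], "starts on", conference["start_date"])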
data/pydata-berlin-2014/videos/abby-a-django-app-to-document-your-ab-tests.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3063,
  "category": "PyData Berlin 2014",
  "slug": "abby-a-django-app-to-document-your-ab-tests",
  "title": "ABBY: A Django app to document your A/B tests",
  "summary": "",
  "description": "ABBY is a Django app that helps you manage your A/B tests. The main objective is to document all tests happening in your company, in order to better understand which measures work and which don't. Thereby leading to a better understanding of your product and your customer. ABBY offers a front-end that makes it easy to edit, delete or create tests and to add evaluation results. Further, it provides a RESTful API to integrate directly with our platform to easily handle A/B tests without touching the front-end. Another notable feature is the possibility to upload a CSV file and have the A/B test auto-evaluated, although this feature is considered highly experimental. At Jimdo, a do-it-yourself website builder, we have a team of about 180 people from different countries and with professional backgrounds just as diverse. Therefore it is crucial to have tools that allow having a common perspective on the tests. This facilitates having data informed discussions and to deduce effective solutions. In our opinion tools like ABBY are cornerstones to achieve the ultimate goal of being a data-driven company. It enables all our co-workers to review past and plan future tests to further improve our product and to raise the happiness of our customers. The proposed talk will give a detailed overview of ABBY, which eventually will be open-sourced, and its capabilities. I will further discuss the motivation behind the app and the influence it has on the way our company is becoming increasingly data driven.",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/Vx9UCD6V7y4/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20249.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=Vx9UCD6V7y4",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=Vx9UCD6V7y4",
  "tags": [],
  "speakers": [
    "Andy Goldschmidt"
  ],
  "recorded": "2014-07-27"
}
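Every video file added in this commit shares the record layout shown above: an integer id, the conference name in category, a slug matching the file name, optional summary and description text, a list of video sources, the speakers, and the recording date. As a minimal sketch of how these records might be indexed, using only the directory layout visible in this diff:

import json
from pathlib import Path

# Collect every video record added under the PyData Berlin 2014 directory.
records = []
for path in sorted(Path("data/pydata-berlin-2014/videos").glob("*.json")):
    with path.open() as fp:
        records.append(json.load(fp))

# One line per talk: recording date, title, and the available source types.
for record in records:
    sources = ", ".join(video["type"] for video in record["videos"])
    print(record["recorded"], record["title"], "(" + sources + ")")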
data/pydata-berlin-2014/videos/algorithmic-trading-with-zipline.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3068,
  "category": "PyData Berlin 2014",
  "slug": "algorithmic-trading-with-zipline",
  "title": "Algorithmic Trading with Zipline",
  "summary": "",
  "description": "PyData Berlin 2014 - Python is quickly becoming the glue language which holds together data science and related fields like quantitative finance. Zipline is a BSD-licensed quantitative trading system which allows easy backtesting of investment algorithms on historical data. The system is fundamentally event-driven and a close approximation of how live-trading systems operate. Moreover, Zipline comes \"batteries included\" as many common statistics like moving average and linear regression can be readily accessed from within a user-written algorithm. Input of historical data and output of performance statistics is based on Pandas DataFrames to integrate nicely into the existing Python eco-system. Furthermore, statistic and machine learning libraries like matplotlib, scipy, statsmodels, and sklearn integrate nicely to support development, analysis and visualization of state-of-the-art trading systems. Zipline is currently used in production as the backtesting engine powering Quantopian.com -- a free, community-centered platform that allows development and real-time backtesting of trading algorithms in the web browser.",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/Qva7uxmOZuA/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20250.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=Qva7uxmOZuA",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=Qva7uxmOZuA",
  "tags": [],
  "speakers": [
    "Thomas Wiecki"
  ],
  "recorded": "2014-07-26"
}
data/pydata-berlin-2014/videos/building-the-pydata-community.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3059,
  "category": "PyData Berlin 2014",
  "slug": "building-the-pydata-community",
  "title": "Building the PyData Community",
  "summary": "",
  "description": "",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/d9Qm3PPoYNQ/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20261.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=d9Qm3PPoYNQ",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=d9Qm3PPoYNQ",
  "tags": [],
  "speakers": [
    "Travis Oliphant"
  ],
  "recorded": "2014-07-27"
}
data/pydata-berlin-2014/videos/commodity-machine-learning.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3064,
  "category": "PyData Berlin 2014",
  "slug": "commodity-machine-learning",
  "title": "Commodity Machine Learning",
  "summary": "",
  "description": "",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/kX5jrFqryAE/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20262.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=kX5jrFqryAE",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=kX5jrFqryAE",
  "tags": [],
  "speakers": [
    "Andreas Mueller"
  ],
  "recorded": "2014-07-27"
}
data/pydata-berlin-2014/videos/conda-a-cross-platform-package-manager-for-any-b-0.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3057,
  "category": "PyData Berlin 2014",
  "slug": "conda-a-cross-platform-package-manager-for-any-b-0",
  "title": "Conda: a cross-platform package manager for any binary distribution",
  "summary": "",
  "description": "Conda is an open source package manager, which can be used to manage binary packages and virtual environments on any platform. It is the package manager of the Anaconda Python distribution, although it can be used independently of Anaconda. We will look at how conda solves many of the problems that have plagued Python packaging in the past, followed by a demonstration of its features.\r\n We will look at the issues that have plagued packaging in the Python ecosystem in the past, and discuss how Conda solves these problems. We will show how to use conda to manage multiple environments. Finally, we will look at how to build your own conda packages.",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/o47Nndkwffc/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20275.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=o47Nndkwffc",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=o47Nndkwffc",
  "tags": [],
  "speakers": [
    "Ilan Schnell"
  ],
  "recorded": "2014-07-27"
}
data/pydata-berlin-2014/videos/data-oriented-programming.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3073,
  "category": "PyData Berlin 2014",
  "slug": "data-oriented-programming",
  "title": "Data Oriented Programming",
  "summary": "",
  "description": "Computers have traditionally been thought as tools for performing computations with numbers. Of course, its name in English has a lot to do with this conception, but in other languages, like the french 'ordinateur' (which express concepts more like sorting or classifying), one can clearly see the other side of the coin: computers can also be used to extract (usually new) information from data. Storage, reduction, classification, selection, sorting, grouping, among others, are typical operations in this 'alternate' goal of computers, and although carrying out all these tasks does imply doing a lot of computations, it also requires thinking about the computer as a different entity than the view offered by the traditional von Neumann architecture (basically a CPU with memory). In fact, when it is about programming the data handling efficiently, the most interesting part of a computer is the so-called hierarchical storage, where the different levels of caches in CPUs, the RAM memory, the SSD layers (there are several in the market already), the mechanical disks and finally, the network, are pretty much more important than the ALUs (arithmetic and logical units) in CPUs. In data handling, techniques like data deduplication and compression become critical when speaking about dealing with extremely large datasets. Moreover, distributed environments are useful mainly because of its increased storage capacities and I/O bandwidth, rather than for their aggregated computing throughput. During my talk I will describe several programming paradigms that should be taken in account when programming data oriented applications and that are usually different than those required for achieving pure computational throughput. But specially, and in a surprising turnaround, how the amazing amount of computational power in modern CPUs can also be useful for data handling as well.",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/KhJSg_rSzj8/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20260.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=KhJSg_rSzj8",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=KhJSg_rSzj8",
  "tags": [],
  "speakers": [
    "Francesc Alted"
  ],
  "recorded": "2014-07-26"
}
data/pydata-berlin-2014/videos/dealing-with-complexity.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3081,
  "category": "PyData Berlin 2014",
  "slug": "dealing-with-complexity",
  "title": "Dealing With Complexity",
  "summary": "",
  "description": "",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/1_oU4qW7I9M/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20263.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=1_oU4qW7I9M",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=1_oU4qW7I9M",
  "tags": [],
  "speakers": [
    "Jean-Paul Schmetz"
  ],
  "recorded": "2014-07-26"
}
data/pydata-berlin-2014/videos/driving-moores-law-with-python-powered-machine-l.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3075,
  "category": "PyData Berlin 2014",
  "slug": "driving-moores-law-with-python-powered-machine-l",
  "title": "Driving Moore's Law with Python-Powered Machine Learning: An Insider's Perspective",
  "summary": "People talk about a Moore's Law for gene sequencing, a Moore's Law for software, etc. This is talk is about *the* Moore's Law, the bull that the other \"Laws\" ride; and how Python-powered ML helps drive it. How do we keep making ever-smaller devices? How do we harness atomic-scale physics? Large-scale machine learning is key. The computation drives new chip designs, and those new chip designs are used for new computations, ad infinitum. High-dimensional regression, classification, active learning, optimization, ranking, clustering, density estimation, scientific visualization, massively parallel processing -- it all comes into play, and Python is powering it all.",
  "description": "",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/Jm-eBD9xR3w/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20271.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=Jm-eBD9xR3w",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=Jm-eBD9xR3w",
  "tags": [],
  "speakers": [
    "Trent McConaghy"
  ],
  "recorded": "2014-07-26"
}
data/pydata-berlin-2014/videos/exploratory-time-series-analysis-of-nyc-subway-da.json
33 changes: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
{
  "id": 3065,
  "category": "PyData Berlin 2014",
  "slug": "exploratory-time-series-analysis-of-nyc-subway-da",
  "title": "Exploratory Time Series Analysis of NYC Subway Data",
  "summary": "",
  "description": "What questions arise during a quick model assessment? In this hands-on-tutorial we want to cover the whole chain from preparing data to choosing and fitting a model to properly assessing the quality of a predictive model. Our dataset in this tutorial are the numbers of people entering and exiting New York subway stations. Among other ways of building a predictive model, we introduce the python package pydse ( http://pydse.readthedocs.org/ ) and apply it to the dataset in order to derive the parameters of an ARMA-model (autoregressive moving average). At the end of the tutorial we evaluate the models and examine the strengths and weaknesses of various ways to measure the accuracy and quality of a predictive model.",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/U4p46XdXy6A/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20270.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=U4p46XdXy6A",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=U4p46XdXy6A",
  "tags": [],
  "speakers": [
    "Felix Marczinowski",
    "Philipp Mack",
    "S\u00f6nke Niekamp"
  ],
  "recorded": "2014-07-26"
}
data/pydata-berlin-2014/videos/exploring-patent-data-with-python.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3058,
  "category": "PyData Berlin 2014",
  "slug": "exploring-patent-data-with-python",
  "title": "Exploring Patent Data with Python",
  "summary": "Experiences from building a recommendation engine for patent search using pythonic NLP and topic modeling tools such as Gensim.",
  "description": "",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/LWYiF31jiZ0/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20251.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=LWYiF31jiZ0",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=LWYiF31jiZ0",
  "tags": [],
  "speakers": [
    "Franta Polach"
  ],
  "recorded": "2014-07-27"
}
data/pydata-berlin-2014/videos/extract-transform-load-using-metl.json
31 changes: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
{
  "id": 3082,
  "category": "PyData Berlin 2014",
  "slug": "extract-transform-load-using-metl",
  "title": "Extract Transform Load using mETL",
  "summary": "",
  "description": "mETL is an ETL package written in Python which was developed to load elective data for Central European University. Program can be used in a more general way, it can be used to load practically any kind of data to any target. Code is open source and available for anyone who want to use it. The main advantage to configurable via Yaml files and You have the possibility to write any transformation in Python and You can use it natively from any framework as well. We are using this tool in production for many of our clients and It is really stable and reliable. The project has a few contributors all around the world right now and I hope many developer will join soon. I really want to show you how you can use it in your daily work. In this tutorial We will see the most common situations: - Installation - Write simple Yaml configration files to load CSV, JSON, XML into MySQL or PostgreSQL Database, or convert CSV to JSON, etc. - Add tranformations on your fields - Filter records based on condition - Walk through a directory to feed the tool - How the mapping works - Generate Yaml configurations automatically from data source - Migrate a database to another database",
  "quality_notes": "",
  "language": "English",
  "copyright_text": "http://creativecommons.org/licenses/by/3.0/",
  "thumbnail_url": "http://i.ytimg.com/vi/NOGXdKbB-gQ/hqdefault.jpg",
  "duration": null,
  "videos": [
    {
      "url": "http://video.ep14.c3voc.de/20253.mp4",
      "length": null,
      "type": "mp4"
    },
    {
      "url": "http://www.youtube.com/watch?v=NOGXdKbB-gQ",
      "length": 0,
      "type": "youtube"
    }
  ],
  "source_url": "http://www.youtube.com/watch?v=NOGXdKbB-gQ",
  "tags": [],
  "speakers": [
    "Bence Faludi"
  ],
  "recorded": "2014-07-26"
}