[WIP] Dataset documentation (microsoft#992)
* fix: Pass str not ParamSpec
* fix: Use the argument string to connect
* feat: Add add_parameter_ to compose better. Note to self and future reader: an underscore appended to a function name denotes a function whose side-effects need to be committed!
* oopified all the things, tad tired
* fix: man I must have been tired
* fix: Use list instead of unpacking
* add todo, finish implementation
* add todo about metadata
* fix: Return 0 instead of None if the dataset is empty
* feat: Add subscriber
* add example
* docs: add example using zmq
* fix: Typo in SQL
* remove junk
* docs: polish API, add docstrings
* polish up
* docs: add examples
* remove old tests
* add requirements
* remove leftovers
* remove spec
* move examples out
* clean notebooks
* fix: Move experiment id concerns to the PUBLIC API. That is: a dataset needs an exp_id. If it's not passed to new_dataset, then we use the last exp_id.
* fix: Create better dirs for merge with qcodes
* fix: integrate with qcodes
* fix: make sqlite operations public
* fix: many details
* feat: Add exp container
* fix: Configure container in config
* docs: update examples
* fix: Remove leftover todo
* style: PEP8'ing for great justice
* Shuffle around notebooks
* [WIP] pave the way for auto-plotting; "inject dependencies"
* [WIP] Introduce half-finished plot function
* Feature/dataset (microsoft#806)
* Add notebook with some benchmarking experiments
* Fix add_results and insert_many
* Update benchmarking
* MILESTONE: first working version of plot_by_id for 1D and 2D
* sort axes for 1D plotting
* add examples of working quick plotting
* feat: add measurements.py file, add Runner
* add Measurement object
* fix: don't use the type keyword
* Add interface for using QCoDeS parameters
* [MILESTONE] first notebook with the context manager
* add debug logging, some of which should be removed again
* Remove some debugging, make _rows_from_datapoints 130 times faster
* add adaptive sweep to context manager notebook
* typos and log
* typo
* refactor code and reuse add layouts
* Make dataset pass with mypy
* remove absolute import from __init__ as it seems to break autodoc
* no longer build for Python 3.5
* Add some debugging to exp container
* Add tests for dataset: very basic but working setup and teardown
* Validate table name to prevent SQL injection
* Actually check
* improve tests
* fix testing of unicode
* better test
* More tests
* remove db
* add_result takes a dict from str to single values, not lists
* Improve tests
* Correct function sig
* remove db which should never have been committed
* remove prints
* improve tests
* more tests
* remove print statement
* Remove unused import
* Make sure to close all connections in teardown. This fixes test failures due to failure to remove the tmpdir on Windows.
* Correct dac validators (microsoft#906): use the correct validator range (cf. d5a driver) and adjust naming
* add publisher from logging branch
* Update notebook
* typos
* Add simple example of JSON exporter for the webui
* add notebook with example of exporting
* add optional kwargs at creation time to subscriber
* style: change camelCase to snake_case
* [WIP] tests for the measurement context manager
* add unregister_parameter + test
* add some testing for ParamSpec
* change ParamSpec paramtypes in test from "number" to "real"
* remove old param_spec function
* 100% coverage of param_spec
* add register_custom_parameter + test
* fix: make refactor work
* add a number_of_results property to DataSet
* remove debug print statement
* lowercase paramtype
* test datasaver and support/implement array unravelling
* validate ParamSpec names
* remove debugging print statement
* update test with valid ParamSpec names
* add tests for enter/exit actions and change OrderedDict to list
* fully cover add enter/exit actions
* cover write_period property in test
* avoid infinite recursion in write_period
* fully cover unregister_parameter in test
* add a station to the datasaver scalar test
* mypy
* codacy: removed unused imports
* make database errors hard errors in the ctx mgr
* clean up non-working SQL error test
* add test for add_parameter_values
* add test for modify_result and insert an exception
* sort imports and test CompletedError in modify_results
* add exception checks to test_add_data_1d
* add SQLiteSettings object to hold settings read at import time
* add little test for SQLiteSettings
* correct docstring and add dependency checks
* make insert_many_values consider input length + test
* add SQLiteSettings to __init__.py
* copy DEFAULT_LIMITS to avoid modifying it
* playing with Travis
* revert "playing with Travis"
* update test_adding_too_many_results
* add VERSION to qc.SQLiteSettings
* try to use VERSION to make Travis happy
* add a failing test to read Travis' sqlite version
* cheat with version to check if MAX_COMPOUND_SELECT is to blame
* fix typo
* add test + remedy for writing a divisor of MAX_VARIABLE_NUMBER
* increase a deadline to avoid flakiness on Travis
* increase another deadline to avoid flakiness on Travis
* replace "id" by "run_id" and "exp_id"
* remove unused imports and variables
* turn deadlines off for two otherwise flaky tests
* remove more unused stuff and a redundant test
* remove old double definition
* add a few tests for sqlite_base
* modify a test to do a double-catch
* add functions to get and set user version, to be used for db upgrade
* add simple test to do a silly upgrade of the database
* update error catching in test_atomic_raises
* remove unused variable
* Add a docstring to Subscriber
* squash! Add a docstring to Subscriber
* squash! squash! Add a docstring to Subscriber
* Add a pedagogical notebook on the Subscriber
* Change Subscriber log debug and offset min_count
* Update example notebook to use redefined min_count
* Add a simple test for dataset subscription
* Change snapshot to match existing structure more closely
* Build dataset notebooks too
* correct Makefile
* add dataset notebooks to index
* Merge types 'real' and 'integer' into 'numeric'
* Update notebooks to latest API changes. The changes being: 'real' and 'integer' -> 'numeric', and no id attribute of anything anymore.
* Add titles to notebooks
* Remove types from context manager API
* Add support for ArrayParameters in the DataSaver
* [WIP] Add tests for ArrayParameter support
* make examples executable
* WIP dataset importer for old data
* Add datasets as fixtures for tests
* use the simple importer
* update notebook
* add tests of loading old datasets
* add property for dataset to datasaver
* Add support for storing metadata to importer
* update notebook
* Add smoketest of JSON in dataset
* Update add_results to make test_datasaver_array_parameters pass
* Disable deadline for test_datasaver_array_parameters
* PEP8 one line
* Expand docstring for Measurement
* Add format_string as attribute to Experiment
* Make Measurement name settable and use that for result_table
* refactor loading
* add more examples to dataset context manager
* Speed up plotting functions for later use in data exporting
* more efficient sorting on 2D array
* update notebook
* refactor code to make numpy array export reusable
* Add data exporter to numpy array
* update notebook with use of exporter
* Add example notebook with real instruments
* Tidy up Context Manager notebook
* Tidy up Load old data notebook. Sphinx is unhappy about png images; when using %matplotlib notebook, that problem is circumvented.
* Move Real Instruments example notebook
* Change Makefile to execute DataSet example notebooks
* Fix typo
* docs: Include the Real_instruments subfolder
* Fix typo in index.rst
* Update Makefile to create needed directories
* add notebook with dond
* Remove example-loading notebook
* Remove subscriber-example notebook
* Add jupyter_client to docs_requirements
* add plantuml diagram
* Make Makefile generate scripts instead of executing
* Revert "Make Makefile generate scripts instead of executing". This reverts commit 46ede2b.
* Add jupyter to docs_requirements
* [temp] Print available jupyter kernels on Travis
* Stop printing jupyter-kernelspec list on Travis
* Add scipy to docs_requirements
* Give Travis 20 times longer to execute each notebook
* Increase hypothesis deadline for combined loop test
* add notebook with dond
* add plantuml diagram
* Move dataset spec to documentation
* add diagram to docs
* Williams document as converted by pandoc
* refactor to improve rst
* add more diagrams
* Move figures to subfolder
* add new section to fill out
* fix rst syntax in spec
* typos
* update dataset diagram
* fix typos in docs
* start writing text
* Add subscriptions to Measurement object (+ test + notebook)
* Update logging for subscribers in data_set
* Replace spaces with underscores in notebook name
* Tweak some test parameters in Measurement test
* Fix mypy issues in measurements.py
* Tweak test parameters again
* Add some more explanation about the dataset
* Fixing a few typos in dataset_design.rst
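One of the commits above validates table names to prevent SQL injection. Identifiers cannot be bound as SQL parameters in SQLite, so the usual remedy is to whitelist the name before interpolating it into the statement. A minimal sketch of the idea (the helper name and regex are illustrative, not QCoDeS's actual implementation):

```python
import re
import sqlite3

# Allow only identifier-like names: letters, digits, underscores,
# not starting with a digit.
_TABLE_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def create_results_table(conn: sqlite3.Connection, table_name: str) -> None:
    """Create a table, rejecting names that could smuggle in SQL."""
    if not _TABLE_NAME_RE.match(table_name):
        raise ValueError(f"Invalid table name: {table_name!r}")
    # Safe to interpolate now: the name contains no quotes or semicolons.
    conn.execute(f'CREATE TABLE "{table_name}" (id INTEGER PRIMARY KEY)')

conn = sqlite3.connect(":memory:")
create_results_table(conn, "results_1")          # accepted
try:
    create_results_table(conn, 'x"; DROP TABLE results_1; --')
except ValueError:
    pass  # rejected before any SQL reaches SQLite
```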
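Another commit adds functions to get and set the user version for database upgrades. SQLite stores a small application-defined integer in the database header, readable and writable via `PRAGMA user_version`, which makes it a natural place to record the schema version. A hedged sketch of such helpers (function names are hypothetical; note that PRAGMA statements do not accept bound parameters, so the value is validated before interpolation):

```python
import sqlite3

def get_user_version(conn: sqlite3.Connection) -> int:
    """Read the schema version stored in the database header."""
    return conn.execute("PRAGMA user_version").fetchone()[0]

def set_user_version(conn: sqlite3.Connection, version: int) -> None:
    """Write the schema version; validate since PRAGMA can't bind params."""
    if not isinstance(version, int) or version < 0:
        raise ValueError("version must be a non-negative integer")
    conn.execute(f"PRAGMA user_version = {version}")

conn = sqlite3.connect(":memory:")
assert get_user_version(conn) == 0  # fresh databases start at version 0
set_user_version(conn, 1)           # record that the schema was upgraded
assert get_user_version(conn) == 1
```

An upgrade routine can then compare the stored version against the code's expected version and apply migrations only when they differ.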