PSLC DataShop provides two main services to the learning science community:
- a central repository to secure and store research data
- a set of analysis and reporting tools
Researchers can rapidly access standard reports such as learning curves, as well as browse data using the interactive web application. To support other analyses, DataShop can export data to a tab-delimited format compatible with statistical software and other analysis packages.
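As a sketch of what loading such an export might look like, here is a minimal example using Python's standard csv module; the column names and rows below are illustrative stand-ins, not the exact export schema.

```python
import csv
import io

# A tiny stand-in for a tab-delimited DataShop transaction export.
# The column names below are illustrative, not the exact export schema.
sample_export = (
    "Anon Student Id\tProblem Name\tOutcome\n"
    "S01\tEQ-01\tCORRECT\n"
    "S01\tEQ-02\tINCORRECT\n"
    "S02\tEQ-01\tCORRECT\n"
)

def read_transactions(text):
    """Parse a tab-delimited export into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

rows = read_transactions(sample_export)
correct = sum(1 for r in rows if r["Outcome"] == "CORRECT")
print(len(rows), correct)  # 3 transactions, 2 of them correct
```

The same tab-delimited file can be fed directly to statistical packages; the point here is only that the format is a plain header row plus one row per transaction.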
Monday, 9 January 2017
Attention! DataShop downtime for patch
DataShop is going to be down for an hour beginning at 8:00am EST on Tuesday, January 10, 2017 while our servers are being updated with a patch.
Friday, 14 October 2016
DataShop 9.2 released - Alpha version of Workflow tool
The latest release of DataShop introduces an analytic workflow authoring tool. The alpha version of this tool allows users to build and run component-based process models to analyze, manipulate and visualize data.
The workflow authoring tool is part of the community software infrastructure being built under the umbrella of the LearnSphere project, with partners at Stanford, MIT and the University of Memphis. The primary data flow in a workflow is a table, so users are not restricted to DataShop data. The platform will provide a way to create custom analyses and interact with proprietary data formats and repositories, such as MOOCdb, DiscourseDB and DataStage.
Users can request early access to the Workflow tool using the "Workflows" link in the left-hand navigation. Once access is granted, the link opens the "Manage Workflows" page, which is also available as a main tab on each dataset page.
Workflows are created by dragging and dropping components into the tool and making connections between them. Options can be configured for each component by clicking on its gear icon. For example, in the import component, users can upload a file or choose from a list of dataset files to which they have access.
Once a workflow has been run, clicking on any component's magnifying glass icon or the primary "Results" button will display the output of each component. A preview of the results is also available as a mouse-over on the component output nodes.
In the near future, we will invite users to contribute their own components to the Workflow tool. This feature will allow researchers to share analysis tools for application to other datasets.
In addition to the Workflow tool, we have added a few enhancements and fixed several bugs:
- The Metrics Report now includes an "Unspecified" category for datasets without a Domain or LearnLab configured. In previous releases, these datasets were omitted from the report, so the amount of data shown understated the actual total.
- KC Model exports are now being cached, allowing for faster exports of models in the same dataset.
- Users running their own DataShop instances will find that Research Goals now include links to recommended datasets and papers on the master server, DataShop@CMU.
- For Dataset Uploads, two restrictions on the upload format have been relaxed. See the Tab-delimited Format Help for details.
- If a Step Name is specified, the Selection-Action-Input is no longer required.
- Previously, if both the Problem View (PV) and Problem Start Time (PST) were specified, then the PV was recomputed based on the PST. With this release, if the two values do not agree, the PV in the upload is used.
- Users are now required to select a Domain/LearnLab designation during dataset upload.
Monday, 10 October 2016
Attention! DataShop downtime for release of v9.2
DataShop is going to be down for 3-4 hours beginning at 7:00am EST on Friday, October 14, 2016 while our servers are being updated for the new release.
Tuesday, 26 April 2016
DataShop 9.1 released
In the spirit of collaboration, this release focuses on integration with our LearnSphere partners, with the long-term goal of creating a community software infrastructure that supports sharing, analysis and collaboration across a wide variety of educational data. Building on DataShop and efforts by partners Stanford, MIT and the University of Memphis, LearnSphere will not only maintain a central store of metadata about what datasets exist, but also have distributed features allowing contributors to control access to their own data. The primary features in support of this collaboration are:
- DataShop now supports both Google and InCommon Federation single sign-on (SSO) options. SSO allows users to access DataShop with an account they already use elsewhere, e.g., their university or institution account in the case of the InCommon login.
If you currently use the local login option, please contact us about migrating your account to one of the SSO options.
- Users can now upload a DiscourseDB discourse to DataShop. With support for DiscourseDB, users can view meta-data for discourses and, with appropriate access, download the database import file (MySQL dump).
- We have developed a DataShop virtual machine instance (VMI) which allows users to configure their own slave DataShop instance. The remote (slave) instance is a fully-functioning DataShop instance that runs on your server, allowing you to maintain full control over your data, while having your dataset meta-data synced with the production, or master, DataShop instance. If you are interested in having your site host a remote DataShop instance, please contact us.
In addition to the headlining features, this release also adds the following support:
- Users can now create a sample of their dataset by filtering on Custom Fields. Sampling by the name and/or the value of the custom field is supported, so users can create subsets of a dataset based on the values the tutor assigned to each transaction. For example, if a tutor marks each transaction as high- or low-stakes for the student, researchers analyzing the data can filter on that designation.
- The Additive Factors Model (AFM) is no longer limited explicitly by the number of skills in a skill model. Previously, AFM would not run if there were more than 300 skills in a model. Now the number of students and the size of the step roll-up, as well as the number of skills, factor into the decision.
- The file size limit for dataset and file uploads was increased from 200MB to 400MB.
- The number of KC Models in a dataset is now part of the dataset summary on the project page.
- Alignment errors were fixed in the KC Model Export for the case of multiple models with multiple skills.
- Clearing the Project on the Dataset Info page no longer results in an error.
- The Error Report now correctly displays HTML/XML inputs in the Answer and Feedback/Classification columns. Similarly, display errors resulting from inputs that contain mark-up were fixed in the Exports.
Monday, 25 April 2016
Attention! DataShop downtime for release of v9.1
DataShop is going to be down for 7 hours beginning at 6:00am EST on Tuesday, April 26, 2016 while our servers are being updated for the new release.
Friday, 4 September 2015
DataShop 9.0 released
With the latest release of DataShop, our focus was on fixing bugs and enhancing a few existing features.
- Users can now quickly navigate from problem-specific information in a Learning Curve or Performance Profiler report directly to that problem in the Error Report; an "Error Report" button has been added to the tooltips. The Error Report includes information on the actual values students entered and the feedback received when working on the problem.
- In the Performance Profiler, if a secondary KC model is selected, the skills from the secondary model that are present in the problem are included in the problem info tooltip.
- If the Additive Factors Model (AFM) or Cross Validation (CV) algorithms fail or cannot be run, the reason is now available to the user as a tooltip. The tooltip is present when hovering over the status in the KC Models table. If you have follow-up questions, remember that you can always send email to datashop-help.
- Users can now sort the skills in a particular KC model to indicate learning difficulty. By sorting the KC model skills by intercept and then tagging those for which the slope is below some threshold, users can easily identify skills that may be misspecified and should be split into multiple skills. See the DataShop Tutorial videos on how to change the skills and test the result of that change. This sorting feature is available on the "Model Values" tab of the Learning Curve page.
- The Cross Validation calculation was modified to provide more statistically valid results. The new calculation computes an average over 20 runs in determining the root mean squared error (RMSE).
- The Student-Step Export was updated to print only a single predicted-error-rate value for steps with multiple skills, as the values are always the same.
- The Help pages for the Additive Factors Modeling (AFM) have been updated to indicate that DataShop implements a compensatory sum across all Knowledge Components when there are multiple KCs assigned to a single step.
- The KC Model Import was fixed to ensure that invalid characters cannot be used in the model name not only during initial model import, but also in the dialog box that comes up when a duplicate name is detected.
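The cross-validation change in this release averages the root mean squared error over multiple runs. A minimal sketch of that averaging step follows; the per-run predictions and outcomes are made up for illustration, and only three runs are shown where DataShop uses 20.

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between equal-length sequences."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Illustrative per-run predicted error rates vs. observed outcomes (0/1);
# three runs shown here, where DataShop averages over 20.
runs = [
    ([0.2, 0.5, 0.9], [0, 1, 1]),
    ([0.3, 0.4, 0.8], [0, 0, 1]),
    ([0.1, 0.6, 0.7], [0, 1, 1]),
]
avg_rmse = sum(rmse(p, a) for p, a in runs) / len(runs)
print(round(avg_rmse, 3))
```

Averaging across runs smooths out the variation introduced by any single random fold assignment, which is what makes the reported RMSE more statistically stable.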
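The compensatory sum the updated AFM Help pages describe can be sketched as follows. This is an illustration of the standard AFM form, not DataShop's implementation; the parameter names and values are assumed for the example.

```python
import math

def afm_p_correct(theta, kc_params, opportunities):
    """Predicted probability of a correct first attempt under AFM.

    theta: student proficiency. kc_params: (easiness beta, learning-rate
    gamma) for each KC on the step. opportunities: prior practice
    opportunities per KC. With multiple KCs on a step, the contributions
    are summed in the logit -- the compensatory model.
    """
    logit = theta + sum(b + g * t
                        for (b, g), t in zip(kc_params, opportunities))
    return 1.0 / (1.0 + math.exp(-logit))

# A step tagged with two KCs: both KC terms add into the prediction.
p = afm_p_correct(theta=0.1,
                  kc_params=[(0.5, 0.2), (-0.3, 0.1)],
                  opportunities=[3, 1])
print(round(p, 3))  # logit = 0.1 + 1.1 - 0.2 = 1.0, so p ~= 0.731
```

Because the KC terms sum, a strong (easy, well-practiced) KC can compensate for a weak one on the same step, which is what "compensatory" means here.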
Wednesday, 2 September 2015
Attention! DataShop downtime for release of v9.0
DataShop is going to be down for 2 hours beginning at 6:00am EST on Friday, September 4, 2015 while our servers are being updated for the new release.
Friday, 29 May 2015
DataShop 8.2 released - several enhancements and bug fixes
With the latest release of DataShop, our focus was on fixing bugs and enhancing several existing features.
- In order to easily see which skills are associated with a problem or step, the Performance Profiler tooltips now include the relevant KCs.
Similarly, the KC information is now included in the Problem point info tooltips on the Learning Curve page.
- Web Services was extended to allow users to import new KC models. This functionality, already available via the UI, lets web service users add new KC models to a dataset, mapping steps to skills.
- To make it easier for researchers to identify a student's last attempt at a step, we have added an 'Is Last Attempt' column to the transaction export. This boolean value is true (1) for the transaction with the maximum 'Attempt At Step' and 'Problem View' for a student and step. This can be useful for grading purposes.
- In order to be more flexible, Custom Fields no longer specify a data type; the values in a Custom Field can be mixed, supporting date, number and string formats in a single Custom Field.
Custom Field string values of up to 65,000 characters are now supported.
- KC Models are now grouped by the number of observations with skills (KCs). Comparisons of models only make sense for those with the same number of observations; the models are now grouped by 'Observations with KCs' before being sorted by the user-specified column.
- Previously, uploaded datasets could not be deleted if they had been accessed by any other user. With this release, a Project Admin who wishes to delete a dataset they uploaded is shown the list of users who have accessed it and asked to confirm the deletion.
- We relaxed an import restriction requiring all transactions to have a value for every dataset level defined in the import. This is not required for logged datasets, so we removed the restriction for uploaded datasets as well.
- A performance bottleneck in the dataset upload path was removed.
- The Additive Factors Modeling (AFM) code now correctly interprets the Outcome column when the value is "Unknown".
- Fixed a bug where exporting transactions for a sample with a long string in the Custom Field value caused an error.
- The Sample to Dataset feature now limits permission to create a dataset from a sample to DataShop and Project Admins.
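The 'Is Last Attempt' flag described above can be computed from the export columns themselves; here is a minimal sketch, where the field names mirror the export columns ('Problem View', 'Attempt At Step') but the data and logic are illustrative, not DataShop's implementation.

```python
# Field names mirror the export columns ('Problem View', 'Attempt At
# Step'); the data and logic are an illustration only.
transactions = [
    {"student": "S01", "step": "x+2=5", "problem_view": 1, "attempt": 1},
    {"student": "S01", "step": "x+2=5", "problem_view": 1, "attempt": 2},
    {"student": "S01", "step": "x+2=5", "problem_view": 2, "attempt": 1},
]

# The last attempt is the transaction with the maximum
# (Problem View, Attempt At Step) pair per (student, step).
last = {}
for tx in transactions:
    key = (tx["student"], tx["step"])
    rank = (tx["problem_view"], tx["attempt"])
    if key not in last or rank > last[key]:
        last[key] = rank

for tx in transactions:
    key = (tx["student"], tx["step"])
    tx["is_last_attempt"] = int((tx["problem_view"], tx["attempt"]) == last[key])

print([tx["is_last_attempt"] for tx in transactions])  # [0, 0, 1]
```

Note that Problem View is compared before Attempt At Step, so an attempt in a later viewing of the problem outranks a higher attempt number in an earlier viewing.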
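The KC Model grouping described above is essentially a two-level sort: group by 'Observations with KCs' first, then order by the user's chosen column within each group. A sketch, with model names and statistics made up for illustration:

```python
# Model names and statistics are made up for illustration.
models = [
    {"name": "Default",   "obs_with_kcs": 1200, "aic": 310.5},
    {"name": "Split-Sub", "obs_with_kcs": 1200, "aic": 298.2},
    {"name": "Coarse",    "obs_with_kcs": 950,  "aic": 275.0},
]

# Group by 'Observations with KCs' (descending) first, then order by the
# user-chosen column (AIC here) within each group, so only comparable
# models are ranked against each other.
ordered = sorted(models, key=lambda m: (-m["obs_with_kcs"], m["aic"]))
print([m["name"] for m in ordered])  # ['Split-Sub', 'Default', 'Coarse']
```

Even though "Coarse" has the lowest AIC overall, it is fit on fewer observations, so it sorts into its own group rather than being ranked against the other two.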
Tuesday, 26 May 2015
Attention! DataShop downtime for release of v8.2
DataShop is going to be down for 16 hours beginning at 6:00pm EST on Thursday, May 28, 2015 while our servers are being updated for the new release.