diff --git a/.github/workflows/jest.yml b/.github/workflows/jest.yml new file mode 100644 index 0000000000..753c862688 --- /dev/null +++ b/.github/workflows/jest.yml @@ -0,0 +1,20 @@ +# This workflow will do a clean install of node dependencies, build the source code and run tests across different versions of node +# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions + +name: Jest Test + +on: [ push, pull_request ] + +jobs: + build: + + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v2 + - name: Use Node.js + uses: actions/setup-node@v1 + with: + node-version: '14.x' + - run: npm ci + - run: npm test diff --git a/.travis.yml b/.travis.yml deleted file mode 100644 index ab5bda822e..0000000000 --- a/.travis.yml +++ /dev/null @@ -1,3 +0,0 @@ -language: node_js -node_js: - - "8.1" diff --git a/README.md b/README.md index 5b1c591322..7989832cdc 100644 --- a/README.md +++ b/README.md @@ -1,26 +1,41 @@ ![logo](http://www.jspsych.org/img/jspsych-logo.jpg) -jsPsych is a JavaScript library for creating behavioral experiments that run in a web browser. jsPsych provides a framework for defining experiments using a set of flexible plugins that create different kinds of tasks a subject could complete during an experiment. By assembling these different plugins together it is possible to create many different types of experiments. +jsPsych is a JavaScript library for creating behavioral experiments that run in a web browser. It provides a framework for defining experiments using a set of flexible plugins that create different kinds of events, and collect different kinds of data. By assembling these plugins together, it is possible to create a wide range of online experiments. + +jsPsych experiments are created using the languages of the Web: HTML, CSS, and JavaScript. JavaScript is the programming language used by web browsers. 
It provides the most control and flexibility for creating web-based experiments, and allows for easy integration with other JavaScript libraries and server-side tools. Don't have JavaScript experience? Don't worry! jsPsych was designed to make creating online experiments as easy as possible for people without web development experience. + +## What can I do with jsPsych? + +jsPsych comes with a number of plugins that you can use to create tasks and collect data. Some plugins do general things, like present a stimulus (text, image, audio, video) and record a key press or button response along with a response time. Other plugins do more specific things, like show a set of instructions pages, run a drag-and-drop image sorting task, present a Random-Dot Kinematogram, or calibrate the WebGazer eye-tracking extension. See the documentation website for a [list of all plugins](https://www.jspsych.org/plugins/list-of-plugins/), and to see what each plugin can do. + +Often people can create their experiment by combining these plugins together. But if that's not possible for your experiment, you can also modify a plugin file or [create your own plugin](https://www.jspsych.org/overview/plugins/#creating-a-new-plugin). This gives you the flexibility to do exactly what you want, while still taking advantage of jsPsych's general experiment-building framework. + +Getting started +--------------- + +New to jsPsych? A good place to start is the basic [Hello World tutorial](https://www.jspsych.org/tutorials/hello-world/) on the jsPsych website. The [Reaction Time Task tutorial](https://www.jspsych.org/tutorials/rt-task/) is a great next step, since it covers many core topics and features. + +There are also a number of [video tutorials](https://www.jspsych.org/tutorials/video-tutorials), including [Session 1 of the Moving Online Workshop](https://www.youtube.com/watch?v=BuhfsIFRFe8), which provides an overview of jsPsych suitable for brand new users.
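The Hello World tutorial referenced above boils down to defining a trial object and passing a timeline array to `jsPsych.init`. A minimal sketch of that pattern (assuming `jspsych.js` and the `html-keyboard-response` plugin are loaded via `<script>` tags, as in the jsPsych 6.x tutorials):

```javascript
// A minimal trial in the style of the Hello World tutorial.
// Assumes jspsych.js and jspsych-html-keyboard-response.js are
// loaded via <script> tags in the page.
var hello_trial = {
  type: 'html-keyboard-response',
  stimulus: 'Hello world!'
};

// The timeline is just an array of trial (or nested timeline) objects.
var timeline = [hello_trial];

// In the browser, the experiment is then started with:
// jsPsych.init({ timeline: timeline });
```

The trial ends when the participant presses any key, and the key pressed and response time are recorded automatically.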
Examples ---------- -Several example experiments and plugin demonstrations are available in the `/examples` folder. +Several example experiments and plugin demonstrations are available in the `/examples` folder. After you've downloaded the [latest release](https://github.com/jspsych/jsPsych/releases), double-click on an example HTML file to run it in your web browser, and open it with a programming-friendly text editor to see how it works. Documentation ------------- -Documentation is available at [docs.jspsych.org](http://docs.jspsych.org). +Documentation is available at [jspsych.org](https://www.jspsych.org/). Need help? ---------- -For questions about using the library, please use the [Discussions forum](https://github.com/jspsych/jsPsych/discussions). +For questions about using the library, please use the GitHub [Discussions forum](https://github.com/jspsych/jsPsych/discussions). Contributing ------------ -Contributions to the code are welcome. Please use the [Issue tracker system](https://github.com/jodeleeuw/jsPsych/issues) to report bugs or discuss suggestions for new features and improvements. If you would like to contribute code, [submit a Pull request](https://help.github.com/articles/using-pull-requests). +Contributions to the code are welcome. Please use the [Issue tracker system](https://github.com/jspsych/jsPsych/issues) to report bugs or discuss suggestions for new features and improvements. If you would like to contribute code, [submit a Pull request](https://help.github.com/articles/using-pull-requests). See the [Contributing to jsPsych](https://www.jspsych.org/about/contributing/) documentation page for more information. Citation -------- @@ -45,4 +60,4 @@ Credits jsPsych was created by Josh de Leeuw ([@jodeleeuw](https://github.com/jodeleeuw)). -There have been many [contributors](https://github.com/jodeleeuw/jsPsych/blob/master/contributors.md) to the library. Thank you! 
+We're grateful for the many [contributors](https://github.com/jspsych/jsPsych/blob/master/contributors.md) to the library, and for the generous support from a [Mozilla Open Source Support (MOSS)](https://www.mozilla.org/en-US/moss/) award. Thank you! \ No newline at end of file diff --git a/contributors.md b/contributors.md index 9b5ad9b350..fe53d59517 100644 --- a/contributors.md +++ b/contributors.md @@ -1,19 +1,62 @@ The following people have contributed to the development of jsPsych by writing code, documentation, and/or suggesting improvements (in alphabetical order): +* alisdt - https://github.com/alisdt +* Antonia - https://github.com/Ahoidal +* aucuparia - https://github.com/aucuparia * Xiaolu Bai - https://github.com/lbai001 +* bjoluc - https://github.com/bjoluc +* Christian Brickhouse - https://github.com/chrisbrickhouse +* Teon L Brooks - https://github.com/teonbrooks +* Eamon Caddigan - https://github.com/eamoncaddigan * Jason Carpenter * Steve Chao - https://github.com/stchao -* Krisitn Diep - https://github.com/kristiyip +* Zhanwen "Phil" Chen - https://github.com/zhanwenchen +* cthorey - https://github.com/cthorey +* Guy Davidson - https://github.com/guydav +* Kristin Diep - https://github.com/kristiyip +* Ari Dyckovsky - https://github.com/aridyckovsky +* Etienne Gaudrain - https://github.com/egaudrain +* Jon Gauthier - https://github.com/hans +* Robert Gibboni - https://github.com/r-b-g-b * Becky Gilbert - https://github.com/becky-gilbert +* Mark Gorenstein - https://github.com/mgorenstein +* Rui Han - https://github.com/hrcn +* Andy Heusser - https://github.com/andrewheusser +* Angus Hughes - https://github.com/awhug +* Gustavo Juantorena - https://github.com/GEJ1 +* Chris Jungerius - https://github.com/cjungerius +* George Kachergis - https://github.com/kachergis +* Yul Kang - https://github.com/yulkang +* Spencer King - https://github.com/spencerking * Jana Klaus - https://github.com/janakl4us +* Arnold Kochari - https://github.com/akochari +* 
Peter Jes Kohler - https://github.com/pjkohler +* kupiqu - https://github.com/kupiqu +* Daiichiro Kuroki - https://github.com/kurokida * Jonas Lambers +* madebyafox - https://github.com/madebyafox * Shane Martin - https://github.com/shamrt +* Vijay Marupudi - https://github.com/vijaymarupudi * Adrian Oesch - https://github.com/adrianoesch +* Benjamin Ooghe-Tabanou - https://github.com/boogheta +* Nikolay B Petrov - https://github.com/nikbpetrov +* Dillon Plunkett - https://github.com/dillonplunkett * Junyan Qi - https://github.com/GavinQ1 * Sivananda Rajananda - https://github.com/vrsivananda * Dan Rivas - https://github.com/rivasd +* Werner Sævland - https://github.com/wernersa * Marian Sauter - https://github.com/mariansauter +* Ellen Shapiro - https://github.com/designatednerd +* Jan Simson - https://github.com/jansim +* Hannah Small - https://github.com/hesmall +* sprengholz - https://github.com/sprengholz +* Dominik Strohmeier - https://github.com/onkeltom +* Nabeel Sulieman - https://github.com/nabsul +* Hitoshi Tominaga - https://github.com/tbrotherm * Tim Vergenz - https://github.com/vergenzt * Matteo Visconti di Oleggio Castello - https://github.com/mvdoc +* Ilya Vorontsov - https://github.com/VorontsovIE * Wolfgang Walther - https://github.com/wolfgangwalther * Erik Weitnauer - https://github.com/eweitnauer * Rob Wilkinson - https://github.com/RobAWilkinson +* Andy Woods - https://github.com/andytwoods +* Reto Wyss - https://github.com/retowyss \ No newline at end of file diff --git a/css/jspsych.css b/css/jspsych.css index 1e53897a63..9a07da4d84 100644 --- a/css/jspsych.css +++ b/css/jspsych.css @@ -76,9 +76,17 @@ border-color: #ccc; } -.jspsych-btn:hover { +/* only apply the hover style on devices with a mouse/pointer that can hover - issue #977 */ +@media (hover: hover) { + .jspsych-btn:hover { + background-color: #ddd; + border-color: #aaa; + } +} + +.jspsych-btn:active { background-color: #ddd; - border-color: #aaa; + border-color:#000000; } 
.jspsych-btn:disabled { @@ -88,6 +96,79 @@ cursor: not-allowed; } +/* custom style for input[type="range"] (slider) to improve alignment between positions and labels */ + +.jspsych-slider { + appearance: none; + -webkit-appearance: none; + -moz-appearance: none; + width: 100%; + background: transparent; +} +.jspsych-slider:focus { + outline: none; +} +/* track */ +.jspsych-slider::-webkit-slider-runnable-track { + appearance: none; + -webkit-appearance: none; + width: 100%; + height: 8px; + cursor: pointer; + background: #eee; + box-shadow: 0px 0px 0px #000000, 0px 0px 0px #0d0d0d; + border-radius: 2px; + border: 1px solid #aaa; +} +.jspsych-slider::-moz-range-track { + appearance: none; + width: 100%; + height: 8px; + cursor: pointer; + background: #eee; + box-shadow: 0px 0px 0px #000000, 0px 0px 0px #0d0d0d; + border-radius: 2px; + border: 1px solid #aaa; +} +.jspsych-slider::-ms-track { + appearance: none; + width: 99%; + height: 14px; + cursor: pointer; + background: #eee; + box-shadow: 0px 0px 0px #000000, 0px 0px 0px #0d0d0d; + border-radius: 2px; + border: 1px solid #aaa; +} +/* thumb */ +.jspsych-slider::-webkit-slider-thumb { + border: 1px solid #666; + height: 24px; + width: 15px; + border-radius: 5px; + background: #ffffff; + cursor: pointer; + -webkit-appearance: none; + margin-top: -9px; +} +.jspsych-slider::-moz-range-thumb { + border: 1px solid #666; + height: 24px; + width: 15px; + border-radius: 5px; + background: #ffffff; + cursor: pointer; +} +.jspsych-slider::-ms-thumb { + border: 1px solid #666; + height: 20px; + width: 15px; + border-radius: 5px; + background: #ffffff; + cursor: pointer; + margin-top: -2px; +} + /* jsPsych progress bar */ #jspsych-progressbar-container { diff --git a/docs/about/contributing.md b/docs/about/contributing.md index bfd8f9a6ed..bb1e830795 100644 --- a/docs/about/contributing.md +++ b/docs/about/contributing.md @@ -6,21 +6,21 @@ Contributions to jsPsych are welcome!
All of the code is managed through the GitHub repository. #### Discuss the proposed change -If you have a specific modification in mind, open a [new issue via GitHub](https://github.com/jspsych/jsPsych/issues/new). Describe the proposed change and what problem it solves. If you are interested in adding a new plugin to the library, it helps if you post an example of the plugin in use and describe the different use cases of the plugin. +If you have a specific modification in mind -- for instance, a new feature or bug fix -- please open a [new issue via GitHub](https://github.com/jspsych/jsPsych/issues/new). Describe the proposed change and what functionality it adds to the library and/or what problem it solves. If you are interested in adding a new plugin to the library, it helps if you post an example of the plugin in use and describe the different use cases of the plugin (for more guidance, see the "Writing new plugins" section below). -If the modification you are interested in working on is not quite at the point where you have a specific modification to the code base in mind, then it might be helpful to discuss the issue first on the [jsPsych Google group](https://groups.google.com/forum/#!forum/jspsych). +If you are thinking about proposing a change but not at the point where you have a specific modification to the code base in mind, then it might be helpful to discuss the issue first on [GitHub Discussions](https://github.com/jspsych/jsPsych/discussions). Discussion posts can be useful for sharing code and getting feedback before requesting a change to the library. #### Fork the library and modify the code -To make changes to the code, you should fork the jsPsych library via GitHub. Changes should be targeted at the `master` branch. +To make changes to the code, you should fork the jsPsych library via GitHub and make modifications on your fork.
You may find it useful to make modifications on branches, so that you can keep your proposed changes separate from any other unrelated changes you might want to make on your fork. #### Submit a pull request -Once your modification is complete, submit a pull request to merge your changes into the main repository. Pull requests will be reviewed by the project owner. +Once your modification is complete, submit a pull request to merge your changes into the `master` branch of the main repository. Pull requests will be reviewed by the project team. ## Writing new plugins -New plugins are welcome additions to the library. Plugins can be distributed independently of the main library or added to the GitHub repository via a pull request and the process described above. If you want to add your plugin to the main library then there are a few guidelines to follow. +New plugins are welcome additions to the library. Plugins can be distributed independently of the main library or added to the GitHub repository via a pull request, following the process described above. If you want to add your plugin to the main library then there are a few guidelines to follow. #### Make the plugin as general as possible @@ -28,7 +28,7 @@ Plugins are most useful when they are flexible. Avoid fixing the value of parame #### Use the jsPsych.pluginAPI module when appropriate -The pluginAPI module contains functions relevant to plugin development. Avoid duplicating the functions defined within the library in your plugin. If you have a suggestion for improving pluginAPI methods, then go ahead and submit a pull request to modify it directly. +The [pluginAPI module](../core_library/jspsych-pluginAPI.md) contains functions relevant to plugin development. Avoid duplicating the functions defined within the library in your plugin, and instead use the pluginAPI whenever possible. If you have a suggestion for improving pluginAPI methods, then go ahead and submit a pull request to modify it directly. 
#### Document your plugin @@ -36,7 +36,7 @@ When submitting a pull request to add your plugin, make sure to include a docume #### Include an example file -Write a short example file to include in the `examples` directory. This should demonstrate the basic use cases of the plugin as clearly as possible. +Write a short example HTML file to include in the `examples` directory. This should demonstrate the basic use cases of the plugin as clearly as possible. #### Include a testing file diff --git a/docs/about/support.md b/docs/about/support.md index 2f33a152f5..9a284e510e 100644 --- a/docs/about/support.md +++ b/docs/about/support.md @@ -1,7 +1,7 @@ # Support -For questions about jsPsych the preferred method of support is the [jsPsych Google group](https://groups.google.com/forum/#!forum/jspsych). Questions are most likely to be answered when they include a reproducible example of the problem. If you can make your code available online and link to the experiment, that will make the question easier to answer. +For questions about jsPsych the preferred method of support is via [GitHub Discussions](https://github.com/jspsych/jsPsych/discussions). Questions are most likely to be answered when they include a reproducible example of the problem. If you can make your code available online and link to the experiment, that will make the question easier to answer. -If you have identified a problem with jsPsych, such as a bug in the code or an error in the documentation, please [open a new issue](https://github.com/jodeleeuw/jsPsych/issues) on the GitHub site. +If you have identified a problem with jsPsych, such as a bug in the code or an error in the documentation, please [open a new issue](https://github.com/jspsych/jsPsych/issues/new) on the GitHub site. And if you have a suggestion for fixing the problem, feel free to propose a modification by following the steps in the [Contributing to jsPsych](contributing.md) page.
Inquiries for paid consultation to develop experiments using jsPsych or to create new custom jsPsych features can be sent to [josh.deleeuw@gmail.com](mailto:josh.deleeuw@gmail.com). diff --git a/docs/core_library/jspsych-core.md b/docs/core_library/jspsych-core.md index 4d7b4558ee..c6a1892b26 100644 --- a/docs/core_library/jspsych-core.md +++ b/docs/core_library/jspsych-core.md @@ -2,16 +2,16 @@ --- ## jsPsych.addNodeToEndOfTimeline -``` -jsPsych.addNodeToEndOfTimeline(node_parameters, callback) + +```javascript +jsPsych.addNodeToEndOfTimeline(node_parameters) ``` ### Parameters -Parameter | Type | Description ---------- | ---- | ----------- -node_parameters | object | An object defining a timeline. It must have, at a minimum, a `timeline` parameter with a valid timeline array as the value for that parameter. -callback | function | An optional callback function. If adding the node to the timeline requires any preloading of media assets, this callback will be triggered after preloading is compelte. +| Parameter | Type | Description | +| --------------- | -------- | ---------------------------------------- | +| node_parameters | object | An object defining a timeline. It must have, at a minimum, a `timeline` parameter with a valid timeline array as the value for that parameter. | ### Return value @@ -21,9 +21,7 @@ None. Adds the timeline to the end of the experiment. -### Examples - -#### Without callback +### Example ```javascript var trial = { @@ -38,20 +36,35 @@ var new_timeline = { jsPsych.addNodeToEndOfTimeline(new_timeline) ``` -### With callback +--- +## jsPsych.allTimelineVariables + +```javascript +jsPsych.allTimelineVariables() +``` + +### Parameters + +None. + +### Return value + +Returns an object with all available timeline variables at this moment in the experiment, represented as `key: value` pairs. + +### Description + +This function can be used to get all the timeline variables at a particular moment in the experiment. 
Can be useful for annotating +data, such as in the example below. + +### Example ```javascript -var first = { +var trial = { type: 'html-keyboard-response', - stimulus: 'first trial; new trial added when on_finish is called', - on_finish: function(){ - jsPsych.pauseExperiment(); - jsPsych.addNodeToEndOfTimeline({ - timeline: [{ - type: 'image-keyboard-response', - stimulus: 'img/happy_face_4.jpg' - }] - }, jsPsych.resumeExperiment) + stimulus: 'Just a demo', + on_finish: function(data){ + // merge all timeline variables available at this trial into the data for this trial + Object.assign(data, jsPsych.allTimelineVariables()) } } ``` @@ -59,7 +72,7 @@ var first = { --- ## jsPsych.currentTimelineNodeID -``` +```javascript jsPsych.currentTimelineNodeID() ``` @@ -99,14 +112,13 @@ The rules about iterations apply throughout the hierarchical ID: ```javascript var id = jsPsych.currentTimelineNodeID(); - console.log('The current TimelineNode ID is '+id); ``` --- ## jsPsych.currentTrial -``` +```javascript jsPsych.currentTrial() ``` @@ -125,16 +137,15 @@ Get a description of the current trial ### Example ```javascript - var trial = jsPsych.currentTrial(); - console.log('The current trial is using the '+trial.type+' plugin'); ``` + --- ## jsPsych.endCurrentTimeline -``` -jsPsych.endCurrentTimeline +```javascript +jsPsych.endCurrentTimeline() ``` ### Parameters @@ -151,10 +162,9 @@ Ends the current timeline. If timelines are nested, then only the timeline that ### Example -#### Loop indefinitely until a particular key is pressed +#### End timeline if a particular key is pressed ```javascript - var images = [ "img/1.gif", "img/2.gif", "img/3.gif", "img/4.gif", "img/5.gif", "img/6.gif", "img/7.gif", "img/8.gif", @@ -170,10 +180,10 @@ for (var i = 0; i < images.length; i++) { var block = { type: 'image-keyboard-response', - choices: [89, 78], // Y or N - prompt: '

<p>Press Y to Continue. Press N to end this node of the experiment.</p>

', + choices: ['y', 'n'], + prompt: '

<p>Press "y" to Continue. Press "n" to end this node of the experiment.</p>

', on_finish: function(data) { - if (data.key_press == 78) { + if (jsPsych.pluginAPI.compareKeys(data.response, 'n')) { jsPsych.endCurrentTimeline(); } }, @@ -182,8 +192,7 @@ var block = { var after_block = { type: 'html-keyboard-response', - stimulus: '

<p>The next node</p>

', - is_html: true + stimulus: '

<p>The next node</p>

' } jsPsych.init({ @@ -192,21 +201,20 @@ jsPsych.init({ jsPsych.data.displayData(); } }); - ``` --- ## jsPsych.endExperiment -``` +```javascript jsPsych.endExperiment(end_message) ``` ### Parameters -Parameter | Type | Description ---------- | ---- | ----------- -end_message | string | A message to display on the screen after the experiment is over. +| Parameter | Type | Description | +| ----------- | ------ | ---------------------------------------- | +| end_message | string | A message to display on the screen after the experiment is over. | ### Return value @@ -224,11 +232,11 @@ Ends the experiment, skipping all remaining trials. var trial = { type: 'image-keyboard-response', stimulus: 'image1.jpg', - choices: [89,78], // Y or N - prompt: '

<p>Press Y to Continue. Press N to end the experiment</p>

', + choices: ['y', 'n'], + prompt: '

<p>Press "y" to Continue. Press "n" to end the experiment</p>

', on_finish: function(data){ - if(data.key_press == 78){ - jsPsych.endExperiment('The experiment was ended by pressing N.'); + if(jsPsych.pluginAPI.compareKeys(data.response, "n")){ + jsPsych.endExperiment('The experiment was ended by pressing "n".'); } } } @@ -237,15 +245,15 @@ var trial = { --- ## jsPsych.finishTrial -``` +```javascript jsPsych.finishTrial(data) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -data | object | The data to store for the trial. +| Parameter | Type | Description | +| --------- | ------ | -------------------------------- | +| data | object | The data to store for the trial. | ### Return value @@ -266,15 +274,14 @@ This method tells jsPsych that the current trial is over. It is used in all of t ### Example ```javascript - // this code would be in a plugin jsPsych.finishTrial({correct_response: true}); - ``` + --- ## jsPsych.getDisplayElement -``` +```javascript jsPsych.getDisplayElement() ``` @@ -302,7 +309,7 @@ el.style.visibility = 'hidden'; --- ## jsPsych.getProgressBarCompleted -``` +```javascript jsPsych.getProgressBarCompleted() ``` @@ -327,49 +334,47 @@ var progress_bar_amount = jsPsych.getProgressBarCompleted(); --- ## jsPsych.init -``` +```javascript jsPsych.init(settings) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -settings | object | The settings object for initializing jsPsych. See table below. +| Parameter | Type | Description | +| --------- | ------ | ---------------------------------------- | +| settings | object | The settings object for initializing jsPsych. See table below. | The settings object can contain several parameters. The only *required* parameter is `timeline`. -Parameter | Type | Description ---------- | ---- | ----------- -timeline | array | An array containing the objects that describe the experiment timeline. See [Creating an Experiment: The Timeline](../overview/timeline.md). 
-display_element | string | The ID of an HTML element to display the experiment in. If left blank, jsPsych will use the `` element to display content. All keyboard event listeners are bound to this element. In order for a keyboard event to be detected, this element must have focus (be the last thing that the subject clicked on). -on_finish | function | Function to execute when the experiment ends. -on_trial_start | function | Function to execute when a new trial begins. -on_trial_finish | function | Function to execute when a trial ends. -on_data_update | function | Function to execute every time data is stored using the `jsPsych.data.write` method. All plugins use this method to save data (via a call to `jsPsych.finishTrial`, so this function runs every time a plugin stores new data. -on_interaction_data_update | function | Function to execute every time a new interaction event occurs. Interaction events include clicking on a different window (blur), returning to the experiment window (focus), entering full screen mode (fullscreenenter), and exiting full screen mode (fullscreenexit). -on_close | function | Function to execute when the user leaves the page. Can be used, for example, to save data before the page is closed. -exclusions | object | Specifies restrictions on the browser the subject can use to complete the experiment. See list of options below. -show_progress_bar | boolean | If true, then [a progress bar](../overview/progress-bar.md) is shown at the top of the page. -message_progress_bar | string | Message to display next to the progress bar. The default is 'Completion Progress'. -auto_update_progress_bar | boolean | If true, then the progress bar at the top of the page will automatically update as every top-level timeline or trial is completed. -show_preload_progress_bar | boolean | If true, then a progress bar is displayed while media files are automatically preloaded. 
-preload_audio | array | An array of audio files to preload before starting the experiment. -preload_images | array | An array of image files to preload before starting the experiment. -preload_video | array | An array of video files to preload before starting the experiment. -max_load_time | numeric | The maximum number of milliseconds to wait for content to preload. If the wait time is exceeded an error message is displayed and the experiment stops. The default value is 60 seconds. -max_preload_attempts | numeric | The maximum number of attempts to preload each file in case of an error. The default value is 10. There is a small delay of 200ms between each attempt. -use_webaudio | boolean | If false, then jsPsych will not attempt to use the WebAudio API for audio playback. Instead, HTML5 Audio objects will be used. The WebAudio API offers more precise control over the timing of audio events, and should be used when possible. The default value is true. -default_iti | numeric | The default inter-trial interval in ms. The default value if none is specified is 0ms. -experiment_width | numeric | The desired width of the jsPsych container in pixels. If left undefined, the width will be 100% of the display element. Usually this is the `` element, and the width will be 100% of the screen size. +| Parameter | Type | Description | +| -------------------------- | -------- | ---------------------------------------- | +| timeline | array | An array containing the objects that describe the experiment timeline. See [Creating an Experiment: The Timeline](../overview/timeline.md). | +| display_element | string | The ID of an HTML element to display the experiment in. If left blank, jsPsych will use the `` element to display content. All keyboard event listeners are bound to this element. In order for a keyboard event to be detected, this element must have focus (be the last thing that the subject clicked on). 
| +| on_finish | function | Function to execute when the experiment ends. | +| on_trial_start | function | Function to execute when a new trial begins. | +| on_trial_finish | function | Function to execute when a trial ends. | +| on_data_update | function | Function to execute every time data is stored using the `jsPsych.data.write` method. All plugins use this method to save data (via a call to `jsPsych.finishTrial`), so this function runs every time a plugin stores new data. | +| on_interaction_data_update | function | Function to execute every time a new interaction event occurs. Interaction events include clicking on a different window (blur), returning to the experiment window (focus), entering full screen mode (fullscreenenter), and exiting full screen mode (fullscreenexit). | +| on_close | function | Function to execute when the user leaves the page. Can be used, for example, to save data before the page is closed. | +| exclusions | object | Specifies restrictions on the browser the subject can use to complete the experiment. See list of options below. | +| show_progress_bar | boolean | If true, then [a progress bar](../overview/progress-bar.md) is shown at the top of the page. | +| message_progress_bar | string | Message to display next to the progress bar. The default is 'Completion Progress'. | +| auto_update_progress_bar | boolean | If true, then the progress bar at the top of the page will automatically update as every top-level timeline or trial is completed. | +| use_webaudio | boolean | If false, then jsPsych will not attempt to use the WebAudio API for audio playback. Instead, HTML5 Audio objects will be used. The WebAudio API offers more precise control over the timing of audio events, and should be used when possible. The default value is true. | +| default_iti | numeric | The default inter-trial interval in ms. The default value if none is specified is 0ms. | +| experiment_width | numeric | The desired width of the jsPsych container in pixels.
If left undefined, the width will be 100% of the display element. Usually this is the `<body>` element, and the width will be 100% of the screen size. | +| minimum_valid_rt | numeric | The minimum valid response time for key presses during the experiment. Any key press response time that is less than this value will be treated as invalid and ignored. Note that this parameter only applies to _keyboard responses_, and not to other response types such as buttons and sliders. The default value is 0. | +| override_safe_mode | boolean | Running a jsPsych experiment directly in a web browser (e.g., by double clicking on a local HTML file) will load the page using the `file://` protocol. Some features of jsPsych don't work with this protocol. By default, when jsPsych detects that it's running on a page loaded via the `file://` protocol, it runs in _safe mode_, which automatically disables features that don't work in this context. Specifically, the use of Web Audio is disabled (audio will be played using HTML5 audio instead, even if `use_webaudio` is `true`) and video preloading is disabled. The `override_safe_mode` parameter defaults to `false`, but you can set it to `true` to force these features to operate under the `file://` protocol. In order for this to work, you will need to disable web security (CORS) features in your browser - this is safe to do if you know what you are doing. Note that this parameter has no effect when you are running the experiment on a web server, because the page will be loaded via the `http://` or `https://` protocol. | +| case_sensitive_responses | boolean | If true, then jsPsych will make a distinction between uppercase and lowercase keys when evaluating keyboard responses, e.g.
both "a" and "A" responses will be valid when the trial's key choice parameter is "a". Setting this parameter to false is useful if you want key responses to be treated the same way when CapsLock is turned on or the Shift key is held down. The default value is false. | +| extensions | array | Array containing information about one or more jsPsych extensions that are used during the experiment. Each extension should be specified as an object with `type` (required), which is the name of the extension, and `params` (optional), which is an object containing any parameter-value pairs to be passed to the extension's `initialize` function. Default value is an empty array. | Possible values for the exclusions parameter above. -Parameter | Type | Description ---------- | ---- | ----------- -min_width | numeric | The minimum width of the browser window. If the width is below this value, a message will be displayed to the subject asking them to maximize their browser window. The experiment will sit on this page until the browser window is large enough. -min_height | numeric | Same as above, but with height. -audio | boolean | Set to true to require support for the WebAudio API (used by plugins that play audio files). +| Parameter | Type | Description | +| ---------- | ------- | ---------------------------------------- | +| min_width | numeric | The minimum width of the browser window. If the width is below this value, a message will be displayed to the subject asking them to maximize their browser window. The experiment will sit on this page until the browser window is large enough. | +| min_height | numeric | Same as above, but with height. | +| audio | boolean | Set to true to require support for the WebAudio API (used by plugins that play audio files). | ### Return value @@ -384,9 +389,10 @@ This method configures and starts the experiment. 
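As a minimal sketch of how several of the settings documented in the table above fit together, the following builds a settings object for `jsPsych.init`. The parameter names come from the table; the specific values and the empty timeline are placeholders, not part of this diff:

```javascript
// A minimal jsPsych.init configuration sketch. Parameter names are from the
// documented table above; the values here are illustrative placeholders.
var settings = {
  timeline: [],                      // trials / timeline nodes would go here
  show_progress_bar: true,           // display a progress bar at the top of the page
  message_progress_bar: 'Progress',  // custom label next to the progress bar
  default_iti: 250,                  // 250 ms inter-trial interval
  use_webaudio: true,                // prefer the WebAudio API for precise audio timing
  case_sensitive_responses: false,   // 'a' and 'A' count as the same response
  on_finish: function() {
    // runs when the experiment ends; e.g., dump the collected data to the screen
    jsPsych.data.displayData('csv');
  }
};

// In an experiment page this would then be passed to:
// jsPsych.init(settings);
```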
See any of the plugin examples in the [examples folder](https://github.com/jodeleeuw/jsPsych/tree/master/examples) in the GitHub repository. --- + ## jsPsych.initSettings -``` +```javascript jsPsych.initSettings() ``` @@ -412,8 +418,10 @@ console.log(JSON.stringify(settings.timeline)); ``` --- + ## jsPsych.pauseExperiment -``` + +```javascript jsPsych.pauseExperiment() ``` @@ -437,7 +445,7 @@ var trial = { stimulus: 'Press p to take a 30 second break. Otherwise, press c to continue immediately.', choices: ['p','c'], on_finish: function(data){ - if(data.key_press == 80) { // 80 = p + if(jsPsych.pluginAPI.compareKeys(data.response, "p")) { jsPsych.pauseExperiment(); setTimeout(jsPsych.resumeExperiment, 30000); } @@ -446,9 +454,10 @@ var trial = { ``` --- + ## jsPsych.progress -``` +```javascript jsPsych.progress() ``` @@ -460,11 +469,11 @@ None. Returns an object with the following properties: -Property | Type | Description -----------|------|------------ -total_trials | numeric | Indicates the number of trials in the experiment. Note that this does not count possible loops or skipped trials due to conditional statements. -current_trial_global | numeric | Returns the trial index of the current trial in a global scope. Every trial will increase this count by 1. -percent_complete | numeric | Estimates the percent of the experiment that is complete. Works as expected for experiments without conditional or looping timelines. For complex timelines, the percent is an approximation. +| Property | Type | Description | +| -------------------- | ------- | ---------------------------------------- | +| total_trials | numeric | Indicates the number of trials in the experiment. Note that this does not count possible loops or skipped trials due to conditional statements. | +| current_trial_global | numeric | Returns the trial index of the current trial in a global scope. Every trial will increase this count by 1. 
| +| percent_complete | numeric | Estimates the percent of the experiment that is complete. Works as expected for experiments without conditional or looping timelines. For complex timelines, the percent is an approximation. | ### Description @@ -474,15 +483,15 @@ This method returns information about the length of the experiment and the subje ### Example ```javascript - var progress = jsPsych.progress(); - alert('You have completed approximately '+progress.percent_complete+'% of the experiment'); - ``` + --- + ## jsPsych.resumeExperiment -``` + +```javascript jsPsych.resumeExperiment() ``` @@ -506,7 +515,7 @@ var trial = { stimulus: 'Press p to take a 30 second break. Otherwise, press c to continue immediately.', choices: ['p','c'], on_finish: function(data){ - if(data.key_press == 80) { // 80 = p + if(jsPsych.pluginAPI.compareKeys(data.response, "p")) { jsPsych.pauseExperiment(); setTimeout(jsPsych.resumeExperiment, 30000); } @@ -515,17 +524,18 @@ var trial = { ``` --- + ## jsPsych.setProgressBar -``` +```javascript jsPsych.setProgressBar(value) ``` ### Parameters -Parameter | Type | Description ----------|------|------------ -value | numeric | Proprotion (between 0 and 1) to fill the progress bar. +| Parameter | Type | Description | +| --------- | ------- | ---------------------------------------- | +| value | numeric | Proportion (between 0 and 1) to fill the progress bar. | ### Return value @@ -543,9 +553,10 @@ jsPsych.setProgressBar(0.85); ``` --- + ## jsPsych.startTime -``` +```javascript jsPsych.startTime() ``` @@ -568,9 +579,10 @@ var start_time = jsPsych.startTime(); ``` --- + ## jsPsych.timelineVariable -``` +```javascript jsPsych.timelineVariable(variable, call_immediate) ``` @@ -579,11 +591,11 @@ jsPsych.timelineVariable(variable, call_immediate) Parameter | Type | Description ----------|------|------------ variable | string | Name of the timeline variable -call_immediate | bool | Typically this parameter is `false`, or simply ommitted. 
When `false`, the return value is a function that returns the timeline variable. This makes `jsPsych.timelineVariable` suitable for dynamic parameters by default. If `true` the function returns the value of the timeline variable immediately. +call_immediate | bool | This parameter is optional and can usually be omitted. It determines the return value of `jsPsych.timelineVariable`. If `true`, the function returns the _value_ of the current timeline variable. If `false`, the function returns _a function that returns the value_ of the current timeline variable. When `call_immediate` is omitted, the appropriate option is determined automatically based on the context in which this function is called. When `jsPsych.timelineVariable` is used as a parameter value, `call_immediate` will be `false`. This allows it to be used as a [dynamic trial parameter](/overview/dynamic-parameters). When `jsPsych.timelineVariable` is used inside of a function, `call_immediate` will be `true`. It is possible to explicitly set this option to `true` to force the function to immediately return the current value of the timeline variable. ### Return value -Depends on the value of `call_immediate` parameter. See description above. +Either a function that returns the value of the timeline variable, or the value of the timeline variable, depending on the context in which it is used. See `call_immediate` description above. ### Description @@ -592,6 +604,7 @@ Depends on the value of `call_immediate` parameter. See description above. 
### Examples #### Standard use as a parameter for a trial + ```javascript var trial = { type: 'image-keyboard-response', stimulus: jsPsych.timelineVariable('image') } var procedure = { @@ -610,11 +623,12 @@ var procedure = { ``` #### Invoking immediately in a function + ```javascript var trial = { type: 'html-keyboard-response', stimulus: function(){ - return "<img src='"+jsPsych.timelineVariable('image', true)+"'>"; + return "<img src='"+jsPsych.timelineVariable('image')+"'>"; } } @@ -629,11 +643,32 @@ var procedure = { } ``` +Prior to jsPsych v6.3.0, the `call_immediate` parameter had to be set to `true` when `jsPsych.timelineVariable` was called from within a function, such as a [dynamic parameter](/overview/dynamic-parameters): + +```javascript +var trial = { + type: 'html-keyboard-response', + stimulus: function(){ + return "<img src='"+jsPsych.timelineVariable('image', true)+"'>"; + } +} + +var procedure = { + timeline: [trial], + timeline_variables: [ + {image: 'face1.png'}, + {image: 'face2.png'}, + {image: 'face3.png'}, + {image: 'face4.png'} + ] +} +``` --- + ## jsPsych.totalTime -``` +```javascript jsPsych.totalTime() ``` @@ -652,8 +687,33 @@ Gets the total time the subject has been in the experiment. ### Example ```javascript - var time = jsPsych.totalTime(); console.log(time); +``` + +--- + +## jsPsych.version +```javascript +jsPsych.version() +``` + +### Parameters + +None. + +### Return value + +Returns the version number as a string. + +### Description + +Gets the version of jsPsych. + +### Example + +```javascript +var version = jsPsych.version(); +console.log(version); ``` diff --git a/docs/core_library/jspsych-data.md index 4f39040651..f9c7eca548 100644 --- a/docs/core_library/jspsych-data.md +++ b/docs/core_library/jspsych-data.md @@ -3,9 +3,10 @@ The jsPsych.data module contains functions for interacting with the data generated by jsPsych plugins. --- + ## jsPsych.data.addProperties -``` +```javascript jsPsych.data.addProperties(properties) ``` @@ -23,19 +24,19 @@ Returns nothing. 
This method appends a set of properties to every trial in the data object, including trials that have already occurred and trials that have yet to occur. You can use this to record things like the subject ID or condition assignment. - ### Examples #### Assigning a subject ID and condition code + ```javascript jsPsych.data.addProperties({subject: 1, condition: 'control'}); ``` - --- + ## jsPsych.data.displayData -``` +```javascript jsPsych.data.displayData(format) ``` @@ -56,6 +57,7 @@ Outputs all of the data collected in the experiment to the screen in either JSON ### Examples #### Using the on_finish callback function to show data at the end of the experiment + ```javascript jsPsych.init({ experiment_structure: exp, @@ -66,6 +68,7 @@ jsPsych.init({ ``` --- + ## jsPsych.data.get ``` @@ -97,9 +100,10 @@ console.log(all_data.csv()); ``` --- + ## jsPsych.data.getDataByTimelineNode -``` +```javascript jsPsych.data.getDataByTimelineNode(node_id) ``` @@ -120,17 +124,15 @@ Get all the data generated by a specified Timeline. ### Example ```javascript - var current_node_id = jsPsych.currentTimelineNodeID(); - var data_from_current_node = jsPsych.data.getDataByTimelineNode(current_node_id); - ``` --- + ## jsPsych.data.getInteractionData -``` +```javascript jsPsych.data.getInteractionData() ``` @@ -162,11 +164,11 @@ var interaction_data = jsPsych.data.getInteractionData(); console.log(interaction_data.json()); ``` - --- + ## jsPsych.data.getLastTimelineData -``` +```javascript jsPsych.data.getLastTimelineData() ``` @@ -180,20 +182,21 @@ Gets all of the data generated in the same timeline as the last trial. ### Example -```js +```javascript var lasttimelinedata = jsPsych.data.getLastTimelineData(); ``` --- + ## jsPsych.data.getLastTrialData -``` +```javascript jsPsych.data.getLastTrialData() ``` ### Return value -Returns a DataCollection +Returns a DataCollection. ### Description @@ -201,14 +204,15 @@ Gets the data collection containing all data generated by the last trial. 
### Example -``` +```javascript var lasttrialdata = jsPsych.data.getLastTrialData(); ``` --- + ## jsPsych.data.getURLVariable -``` +```javascript jsPsych.data.getURLVariable(var_name) ``` @@ -226,21 +230,19 @@ Returns the value of a variable passed in through the query string. For extracting a particular variable passed in through a URL query string. -### Examples +### Example ```javascript - // if the URL of the page is: experiment.html?subject=1234&condition=test - console.log(jsPsych.data.getURLVariable('subject')) // logs "1234" console.log(jsPsych.data.getURLVariable('condition')) // logs "test" - ``` --- + ## jsPsych.data.urlVariables -``` +```javascript jsPsych.data.urlVariables() ``` @@ -252,23 +254,20 @@ Returns an object (associative array) of the variables in the URL query string. For extracting variables passed in through a URL query string. -### Examples +### Example ```javascript - // if the URL of the page is: experiment.html?subject=1234&condition=test - var urlvar = jsPsych.data.urlVariables(); console.log(urlvar.subject) // logs "1234" console.log(urlvar.condition) // logs "test" - ``` - --- + ## jsPsych.data.write -``` +```javascript jsPsych.data.write(data_object) ``` @@ -286,10 +285,9 @@ Returns nothing. This method is used by `jsPsych.finishTrial` for writing data. You should probably not use it to add data. Instead use [jsPsych.data.addProperties](#addProperties). -### Examples +### Example ```javascript - // don't use this! data should only be written once per trial. use jsPsych.finishTrial to save data. var trial_data = { @@ -298,20 +296,19 @@ var trial_data = { } jsPsych.data.write(trial_data); - ``` --- + ## DataCollection -All data is stored in the DataCollection object. Using methods like `jsPsych.data.get()` and `jsPsych.data.getLastTrialData()` return DataCollections containing -the experiment data. This is a list of all of the methods that are available to call on a DataCollection object. 
+All data is stored in the DataCollection object. Methods like `jsPsych.data.get()` and `jsPsych.data.getLastTrialData()` return DataCollections containing the experiment data. This is a list of all of the methods that are available to call on a DataCollection object. #### .addToAll() Adds a set of properties to all items in the DataCollection. Similar to `jsPsych.data.addProperties()`, except that it can be applied to a subset of the whole DataCollection by filtering down to a smaller DataCollection first. -```js +```javascript jsPsych.data.get().addToAll({subject_id: 123, condition: 'control'}); ``` @@ -319,7 +316,7 @@ jsPsych.data.get().addToAll({subject_id: 123, condition: 'control'}); #### .addToLast() Adds a set of properties to the last trial in the DataCollection. -```js +```javascript jsPsych.data.get().addToLast({success: true}); ``` @@ -327,7 +324,7 @@ jsPsych.data.get().addToLast({success: true}); #### .count() Counts the number of trials in the DataCollection. -```js +```javascript jsPsych.data.get().count() ``` @@ -335,7 +332,7 @@ jsPsych.data.get().count() #### .csv() Generates a CSV string representing all of the data in the DataCollection. -```js +```javascript console.log(jsPsych.data.get().csv()); ``` @@ -343,27 +340,27 @@ console.log(jsPsych.data.get().csv()); #### .filter() Returns a subset of the DataCollection based on the filter. The filter is an object, and trials are only kept in the returned DataCollection if they contain the key: value pair(s) in the filter object. For example, the code below selects all of the trials with a correct response. -```js +```javascript var correct_trials = jsPsych.data.get().filter({correct: true}); ``` The object can have multiple key: value pairs, and the trials must match all of them in order to be included in the returned collection. -```js +```javascript // keep only correct trials from the practice phase var correct_practice_trials = jsPsych.data.get().filter({correct:true, phase: 'practice'}); ``` The filter can also be an array of objects. 
In this case each object in the array acts as an OR filter. As long as the trial has all the key: value pairs of one of the objects in the array, it will appear in the returned collection. -```js +```javascript // select trials from block 1 and block 5. var trials = jsPsych.data.get().filter([{block: 1}, {block:5}]); ``` The filter method returns a DataCollection object, so methods can be chained onto a single statement. -```js +```javascript // count the number of correct trials in block 1 var block_1_correct = jsPsych.data.get().filter({block:1, correct:true}).count(); ``` @@ -372,7 +369,7 @@ var block_1_correct = jsPsych.data.get().filter({block:1, correct:true}).count() This method is similar to the `.filter()` method, except that it accepts a function as the filter. The function is passed a single argument, containing the data for a trial. If the function returns `true` the trial is included in the returned DataCollection. -```js +```javascript // count the number of trials with a response time greater than 2000ms. var too_long = jsPsych.data.get().filterCustom(function(trial){ return trial.rt > 2000; @@ -381,9 +378,9 @@ var too_long = jsPsych.data.get().filterCustom(function(trial){ #### .first() / .last() -Returns a DataCollection containing the first/last *n* trials. +Returns a DataCollection containing the first/last *n* trials. If *n* is greater than the number of trials in the DataCollection, then these functions will return an array of length equal to the number of trials. If there are no trials in the DataCollection, then these functions will return an empty array. If the *n* argument is omitted, then the functions will use the default value of 1. If *n* is zero or a negative number, then these functions will throw an error. 
-```js +```javascript var first_trial = jsPsych.data.get().first(1); var last_trial_with_correct_response = jsPsych.data.get().filter({correct: true}).last(1); var last_10_trials = jsPsych.data.get().last(10); @@ -393,7 +390,7 @@ var last_10_trials = jsPsych.data.get().last(10); Returns a DataCollection with all instances of a particular key removed from the dataset. -```js +```javascript // log a csv file that does not contain the internal_node_id values for each trial console.log(jsPsych.data.get().ignore('internal_node_id').csv()); ``` @@ -402,7 +399,7 @@ console.log(jsPsych.data.get().ignore('internal_node_id').csv()); Appends one DataCollection onto another and returns the combined collection. -```js +```javascript // get a DataCollection with all trials that are either correct or // have a response time greater than 200ms. var dc1 = jsPsych.data.get().filter({correct: true}); @@ -414,7 +411,7 @@ var data = dc1.join(dc2); Generates a JSON string representing all of the data in the DataCollection. -```js +```javascript console.log(jsPsych.data.get().json()); ``` @@ -434,7 +431,7 @@ jsPsych.data.get().localSave('csv','mydata.csv'); Add a new entry to the DataCollection. This method is mostly used internally, and you shouldn't need to call it under normal circumstances. -```js +```javascript var data = {correct: true, rt: 500} jsPsych.data.get().push(data); ``` @@ -443,7 +440,7 @@ jsPsych.data.get().push(data); Creates a copy of the DataCollection so that any modification of the values in the DataCollection will not affect the original. -```js +```javascript // this line edits the rt property of the first trial jsPsych.data.get().first(1).values()[0].rt = 100; @@ -461,7 +458,7 @@ jsPsych.data.get().first(1).values()[0].rt Returns a DataColumn object (see documentation below) of a single property from a DataCollection object. 
-```js +```javascript var rt_data = jsPsych.data.get().select('rt'); rt_data.mean() ``` @@ -470,7 +467,7 @@ rt_data.mean() Generates an array of all the unique key names in the set of trials contained in the DataCollection. This is especially useful when setting up a relational database (e.g., MySQL) where the column names need to be specified in advance. -```js +```javascript console.log(jsPsych.data.get().uniqueNames()); ``` @@ -478,7 +475,7 @@ console.log(jsPsych.data.get().uniqueNames()); Returns the raw data array associated with the DataCollection. This array is modifiable, so changes to the array and values of objects in the array will change the DataCollection. -```js +```javascript var raw_data = jsPsych.data.get().values(); // was response in first trial correct? @@ -490,6 +487,7 @@ if(raw_data[0].correct){ ``` --- + ## DataColumn DataColumn objects represent all the values of a single property in a DataCollection. They are generated by using the `.select()` method on a DataCollection. Once a DataColumn is generated, the following methods can be used. @@ -498,7 +496,7 @@ DataColumn objects represent all the values of a single property in a DataCollec Checks if all values in the DataColumn return `true` when passed to a function. The function takes a single argument, which represents one value from the DataColumn. -```js +```javascript // check if all the response times in the practice phase were under 1000ms jsPsych.data.get().filter({phase: 'practice'}).select('correct').all(function(x) { return x < 1000; }); ``` @@ -507,7 +505,7 @@ jsPsych.data.get().filter({phase: 'practice'}).select('correct').all(function(x) Counts the number of values in the DataColumn. -```js +```javascript // count how many response times there are jsPsych.data.get().select('rt').count(); ``` @@ -516,7 +514,7 @@ jsPsych.data.get().select('rt').count(); Counts the number of occurrences of each unique value in the DataColumn. 
Returns this value as an object, where each key is a unique value and the value of each key is the number of occurrences of that key. -```js +```javascript // get frequencies of correct and incorrect responses jsPsych.data.get().select('correct').frequencies(); ``` @@ -525,7 +523,7 @@ jsPsych.data.get().select('correct').frequencies(); Returns the maximum or minimum value in a DataColumn. -```js +```javascript jsPsych.data.get().select('rt').max(); jsPsych.data.get().select('rt').min(); ``` @@ -534,7 +532,7 @@ jsPsych.data.get().select('rt').min(); Returns the average of all the values in a DataColumn. -```js +```javascript jsPsych.data.get().select('rt').mean(); ``` @@ -542,7 +540,7 @@ jsPsych.data.get().select('rt').mean(); Returns the median of all the values in a DataColumn. -```js +```javascript jsPsych.data.get().select('rt').median(); ``` @@ -550,7 +548,7 @@ jsPsych.data.get().select('rt').median(); Returns the standard deviation of the values in a DataColumn. -```js +```javascript jsPsych.data.get().select('rt').sd(); ``` @@ -558,7 +556,7 @@ jsPsych.data.get().select('rt').sd(); Filters the DataColumn to include only values that return `true` when passed through the specified function. -```js +```javascript // below results will be less than 200. jsPsych.data.get().select('rt').subset(function(x){ return x < 200; }).max(); ``` @@ -567,7 +565,7 @@ jsPsych.data.get().select('rt').subset(function(x){ return x < 200; }).max(); Returns the sum of the values in a DataColumn. -```js +```javascript jsPsych.data.get().select('rt').sum(); ``` @@ -575,7 +573,7 @@ jsPsych.data.get().select('rt').sum(); The raw array of values in the DataColumn. -```js +```javascript // note that this is not a function call. jsPsych.data.get().select('rt').values; ``` @@ -584,6 +582,6 @@ jsPsych.data.get().select('rt').values; Returns the variance of the values in a DataColumn. 
-```js +```javascript jsPsych.data.get().select('rt').variance(); ``` diff --git a/docs/core_library/jspsych-pluginAPI.md b/docs/core_library/jspsych-pluginAPI.md index 805b829406..e50e29ffda 100644 --- a/docs/core_library/jspsych-pluginAPI.md +++ b/docs/core_library/jspsych-pluginAPI.md @@ -3,42 +3,10 @@ The pluginAPI module contains functions that are useful when developing new plugins. --- -## jsPsych.pluginAPI.autoPreload -``` -jsPsych.pluginAPI.autoPreload(timeline, callback) -``` - -### Parameters - -Parameter | Type | Description -----------|------|------------ -timeline | TimelineNode object | A TimelineNode object that contains an arbitrary set of trials. -callback | function | A function to execute when loading is complete - -### Return value - -Returns nothing. - -### Description - -Attempts to preload all image files and audio files that will be used to run the trials on the timeline. Content will only preload from plugins that have used the `registerPreload` method to define the media types of their parameters. - -The callback function executes once all of the files are preloaded. - -This method is used internally by the core jsPsych code. It is not recommended that you call it manually. - -### Examples - -```javascript -// you probably shouldn't use this method -``` - - ---- ## jsPsych.pluginAPI.cancelAllKeyboardResponses -``` +```javascript jsPsych.pluginAPI.cancelAllKeyboardResponses() ``` @@ -54,16 +22,17 @@ Returns nothing. Cancels all currently active keyboard listeners created by `jsPsych.pluginAPI.getKeyboardResponse`. -### Examples +### Example ```javascript jsPsych.pluginAPI.cancelAllKeyboardResponses(); ``` --- + ## jsPsych.pluginAPI.cancelKeyboardResponse -``` +```javascript jsPsych.pluginAPI.cancelKeyboardResponse(listener_id) ``` @@ -81,8 +50,7 @@ Returns nothing. Cancels a specific keyboard listener created by `jsPsych.pluginAPI.getKeyboardResponse`. 
- -### Examples +### Example ```javascript // create a persistent keyboard listener @@ -99,9 +67,10 @@ var listener_id = jsPsych.pluginAPI.getKeyboardResponse({ jsPsych.pluginAPI.cancelKeyboardResponse(listener_id); ``` --- + ## jsPsych.pluginAPI.clearAllTimeouts -``` +```javascript jsPsych.pluginAPI.clearAllTimeouts() ``` @@ -115,12 +84,13 @@ Returns nothing. ### Description -Clears any pending timeouts that were set using jsPsych.pluginAPI.setTimeout() +Clears any pending timeouts that were set using jsPsych.pluginAPI.setTimeout(). --- + ## jsPsych.pluginAPI.compareKeys -``` +```javascript jsPsych.pluginAPI.compareKeys(key1, key2) ``` @@ -133,120 +103,175 @@ key2 | string or numeric | The representation of a key, either string or keycode ### Return value -Returns true if keycodes or strings refer to the same key, regardless of type. +Returns true if keycodes or strings refer to the same key, regardless of type. Returns false if the keycodes or strings do not match. ### Description -Compares two keys to see if they are the same, ignoring differences in representational type. +Compares two keys to see if they are the same, ignoring differences in representational type, and using the appropriate case sensitivity based on the experiment's settings. + +If `case_sensitive_responses` is set to `false` in `jsPsych.init` (the default), then the string key comparison will not be case-sensitive, e.g., "a" and "A" will match, and this function will return `true`. If `case_sensitive_responses` is set to `true` in `jsPsych.init`, then the string key comparison will be case-sensitive, e.g., "a" and "A" will not match, and this function will return `false`. + +We recommend using this function to compare keys in all plugin and experiment code, rather than using something like `if (response == 'j')...`. 
This is because the response key returned by the `jsPsych.pluginAPI.getKeyboardResponse` function will be converted to lowercase when `case_sensitive_responses` is `false`, and it will match the exact key press representation when `case_sensitive_responses` is `true`. Using this `compareKeys` function will ensure that your key comparisons work appropriately based on the experiment's `case_sensitive_responses` setting, and that you do not need to remember to check key responses against different case versions of the comparison key (e.g. `if (response == 'ArrowLeft' || response == 'arrowleft')...`). ### Examples +#### Basic examples + ```javascript +jsPsych.pluginAPI.compareKeys('a', 'A'); +// returns true when case_sensitive_responses is false in jsPsych.init + +jsPsych.pluginAPI.compareKeys('a', 'A'); +// returns false when case_sensitive_responses is true in jsPsych.init + +// also works with numeric key codes (but note that numeric keyCode values are now deprecated) jsPsych.pluginAPI.compareKeys('a', 65); // returns true -jsPsych.pluginAPI.convertKeyCharacterToKeyCode('space', 31) +jsPsych.pluginAPI.compareKeys('space', 31); // returns false ``` ---- -## jsPsych.pluginAPI.convertKeyCharacterToKeyCode +#### Comparing a key response and key parameter value in plugins +```javascript +// this is the callback_function passed to jsPsych.pluginAPI.getKeyboardResponse +var after_response = function(info) { + // score the response by comparing the key that was pressed against the trial's key_answer parameter + var correct = jsPsych.pluginAPI.compareKeys(trial.key_answer, info.key); + //... +} ``` -jsPsych.pluginAPI.convertKeyCharacterToKeyCode(character) -``` - -### Parameters - -Parameter | Type | Description -----------|------|------------ -character | string | The string representation of keyboard key. -### Return value - -Returns the numeric keycode associated with the `character` parameter. 
- -### Description - -Converts between the string representation of a key and the numeric key code associated with that key. - -### Examples +#### Scoring a key response in experiment code ```javascript -var keycode = jsPsych.pluginAPI.convertKeyCharacterToKeyCode('a') -// keycode is 65 - -keycode = jsPsych.pluginAPI.convertKeyCharacterToKeyCode('space') -// keycode is 32 +var trial = { + type: 'html-keyboard-response', + stimulus: '<<<<<', + choices: ['f','j'], + prompt: 'Press f for left. Press j for right.', + on_finish: function(data){ + // score the response by comparing the key that was pressed (data.response) against the + // correct response for this trial ('f'), and store response accuracy in the trial data + if(jsPsych.pluginAPI.compareKeys(data.response, 'f')){ + data.correct = true; + } else { + data.correct = false; + } + } +} ``` --- -## jsPsych.pluginAPI.convertKeyCodeToKeyCharacter -``` -jsPsych.pluginAPI.convertKeyCodeToKeyCharacter(character) +## jsPsych.pluginAPI.getAudioBuffer + +```javascript +jsPsych.pluginAPI.getAudioBuffer(filepath) ``` ### Parameters Parameter | Type | Description ----------|------|------------ -code | numeric | The numeric representation of keyboard key. +filepath | string | The path to the audio file that was preloaded. ### Return value -Returns the string representation of the key associated with the `code` parameter. +Returns a Promise that resolves when the audio file loads. Success handler's parameter will be the audio buffer. If the experiment is running using the WebAudio API it will be an AudioBuffer object. Otherwise, it will be an HTML5 Audio object. The failure handler's parameter is the error generated by `preloadAudio`. ### Description -Converts between the numeric key code of a key and the string representation associated with that key. +Gets an AudioBuffer that can be played with the WebAudio API or an Audio object that can be played with HTML5 Audio. 
+ +It is strongly recommended that you preload audio files before calling this method. This method will load the files if they are not preloaded, but this may result in delays during the experiment as audio is downloaded. ### Examples +#### HTML 5 Audio + ```javascript -var keycode = jsPsych.pluginAPI.convertKeyCharacterToKeyCode(65) -// key is 'a' +jsPsych.pluginAPI.getAudioBuffer('my-sound.mp3') + .then(function(audio){ + audio.play(); + }) + .catch(function(err){ + console.error('Audio file failed to load') + }) +``` + +#### WebAudio API -keycode = jsPsych.pluginAPI.convertKeyCharacterToKeyCode(32) -// keycode is 'space' +```javascript +var context = jsPsych.pluginAPI.audioContext(); + +jsPsych.pluginAPI.getAudioBuffer('my-sound.mp3') + .then(function(buffer){ + audio = context.createBufferSource(); + audio.buffer = buffer; + audio.connect(context.destination); + audio.start(context.currentTime); + }) + .catch(function(err){ + console.error('Audio file failed to load') + }) ``` +See the `audio-keyboard-response` plugin for an example in a fuller context. + --- -## jsPsych.pluginAPI.getAudioBuffer -``` -jsPsych.pluginAPI.getAudioBuffer(filepath) +## jsPsych.pluginAPI.getAutoPreloadList + +```javascript +jsPsych.pluginAPI.getAutoPreloadList(timeline) ``` ### Parameters Parameter | Type | Description ----------|------|------------ -filepath | string | The path to the audio file that was preloaded. +timeline | array | An array containing the trial object(s) from which a list of media files should be automatically generated. This array can contain the entire experiment timeline, or any individual parts of a larger timeline, such as specific timeline nodes and trial objects. ### Return value -Returns buffered audio file for playback. If the browser supports it the buffer will be playable with the WebAudio API. Otherwise, the returned buffer will be an HTML5 Audio object. +An object with properties for each media type: `images`, `audio`, and `video`. 
Each property contains an array of the unique files of that media type that were automatically extracted from the timeline. If no files are found in the timeline for a particular media type, then the array will be empty for that type.

### Description

-Gets an AudioBuffer that can be played with the WebAudio API or an Audio object that can be played with HTML5 Audio. The file must be preloaded with `preloadAudioFiles` or the automatic preload (`autoPreload`).
+This method is used to automatically generate lists of unique image, audio, and video files from a timeline. It is used by the `preload` plugin to generate a list of to-be-preloaded files based on the trials passed to the `trials` parameter and/or the experiment timeline passed to `jsPsych.init` (when `auto_preload` is true). It can be used in custom plugins and experiment code to generate a list of audio/image/video files based on a timeline.

-### Examples
+This function will only return files from plugins that have used the `registerPreload` method to define the media types of their parameters, and only when the trial's parameter value is not a function. When a file path is returned to the trial parameter from a function (including via the `jsPsych.timelineVariable` function), or when the file path is embedded in an HTML string, that file will not be detected by the `getAutoPreloadList` method. In these cases, the file should be preloaded manually. See [Media Preloading](../overview/media-preloading.md) for more information.
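
As an illustration of the caveat about function-valued parameters (the file names and the final manual-preload call are illustrative, not taken from the jsPsych docs):

```javascript
// A static file path can be detected by the automatic scan...
var detected_trial = {
  type: 'image-keyboard-response',
  stimulus: 'img/static.png'
};

// ...but a file path returned from a function cannot be.
var undetected_trial = {
  type: 'image-keyboard-response',
  stimulus: function(){ return 'img/dynamic.png'; }
};

var files = jsPsych.pluginAPI.getAutoPreloadList([detected_trial, undetected_trial]);
// files.images would include 'img/static.png' but not 'img/dynamic.png',
// so the dynamic file should be preloaded manually, e.g.:
jsPsych.pluginAPI.preloadImages(['img/dynamic.png'], function(){ /* ready to start */ });
```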
+
+### Example

```javascript
-// the code below is used to play audio in the audio-keyboard-response plugin
-var source = context.createBufferSource();
-source.buffer = jsPsych.pluginAPI.getAudioBuffer(trial.stimulus);
-source.connect(context.destination);
-startTime = context.currentTime;
-source.start(startTime);
+var audio_trial = {
+  type: 'audio-keyboard-response',
+  stimulus: 'file.mp3'
+}
+
+var image_trial = {
+  type: 'image-keyboard-response',
+  stimulus: 'file.png'
+}
+
+var video_trial = {
+  type: 'video-keyboard-response',
+  stimulus: 'file.mp4'
+}
+
+var timeline = [audio_trial, image_trial, video_trial];
+
+jsPsych.pluginAPI.getAutoPreloadList(timeline);
```

---
+
## jsPsych.pluginAPI.getKeyboardResponse

-```
+```javascript
jsPsych.pluginAPI.getKeyboardResponse(parameters)
```

@@ -274,11 +299,14 @@ Gets a keyboard response from the subject, recording the response time from when

The keyboard event listener will be bound to the `display_element` declared in `jsPsych.init()` (or the `<body>` element if no `display_element` is specified). This allows jsPsych experiments to be embedded in websites with other content without disrupting the functionality of other UI elements.

-A valid response triggers the `callback_function` specified in the parameters. A single argument is passed to the callback function. The argument contains an object with the properties `key` and `rt`. `key` contains the numeric key code of the response, and `rt` contains the response time.
+A valid response triggers the `callback_function` specified in the parameters. A single argument is passed to the callback function. The argument contains an object with the properties `key` and `rt`. `key` contains the string representation of the response key, and `rt` contains the response time.
+
+This function uses the `.key` value of the keyboard event, which is _case sensitive_.
When `case_sensitive_responses` is `false` in `jsPsych.init` (the default), this function will convert both the `valid_responses` strings and the response key to lowercase before comparing them, and it will pass the lowercase version of the response key to the `callback_function`. For example, if `valid_responses` is `['a']`, then both 'A' and 'a' will be considered valid key presses, and 'a' will be returned as the response key. When `case_sensitive_responses` is `true` in `jsPsych.init`, this function will not convert the case when comparing the `valid_responses` and response key, and it will not convert the case of the response key that is passed to the `callback_function`. For example, if `valid_responses` is `['a']`, then 'a' will be the only valid key press, and 'A' (i.e. 'a' with CapsLock on or Shift held down) will not be accepted. Also, if `valid_responses` includes multiple letter case options (e.g. `jsPsych.ALL_KEYS`), then you may need to check the response key against both letter cases when scoring, e.g. `if (response == 'ArrowLeft' || response == 'arrowleft') ...`.

### Examples

#### Get a single response from any key
+
```javascript

var after_response = function(info){

@@ -293,13 +321,14 @@ jsPsych.pluginAPI.getKeyboardResponse({
});
```

-#### Get a responses from a key until the letter Q is pressed
+#### Get responses from a key until the letter q is pressed
+
```javascript
var after_response = function(info){
  alert('You pressed key '+info.key+' after '+info.rt+'ms');
-  if(info.key == 81){ // the key code for 'Q' is 81.
+  if(jsPsych.pluginAPI.compareKeys(info.key,'q')){
    jsPsych.pluginAPI.cancelKeyboardResponse(listener);
  }
}

@@ -313,10 +342,11 @@ var listener = jsPsych.pluginAPI.getKeyboardResponse({
```

---

-## jsPsych.pluginAPI.preloadAudioFiles
-```
-jsPsych.pluginAPI.preloadAudioFiles(files, callback_complete, callback_load)
+## jsPsych.pluginAPI.preloadAudio
+
+```javascript
+jsPsych.pluginAPI.preloadAudio(files, callback_complete, callback_load, callback_error)
```

### Parameters

@@ -325,7 +355,8 @@ Parameter | Type | Description
----------|------|------------
files | array | An array of audio file paths to load. The array can be nested (e.g., if images are in multiple arrays to help sort by condition or task).
callback_complete | function | A function to execute when all the files have been loaded.
-callback_load | function | A function to execute after each file has been loaded. A single parameter is passed to this function which contains the number of files that have been loaded so far.
+callback_load | function | A function to execute after a single file has been loaded. A single parameter is passed to this function which is the file source (string) that has loaded.
+callback_error | function | A function to execute after a single file has produced an error. A single parameter is passed to this function which is the file source (string) that produced the error.

### Return value

@@ -333,38 +364,43 @@ Returns nothing.

### Description

-Use this function to preload audio files that are not part of a plugin with automatic preloading. Audio files in official plugins will automatically preload. See [Media Preloading](../overview/media-preloading.md) for more information.
+This function is used to preload audio files. It is used by the `preload` plugin, and could be called directly to preload audio files in custom plugins or experiment code. See [Media Preloading](../overview/media-preloading.md) for more information.
It is possible to run this function without specifying a callback function. However, in this case the code will continue executing while the files are loaded. Thus, it is possible that an audio file would be required for playing before it is done preloading. The `callback_complete` function will only execute after all the audio files are loaded, and can be used to control the flow of the experiment (e.g., by starting the experiment in the `callback_complete` function).

-The `callback_load` function can be used to indicate progress. See example below.
+The `callback_load` and `callback_error` functions are called after each file has either loaded or produced an error, so these functions can also be used to monitor loading progress. See example below.

### Examples

#### Basic use

-```javascript
+```javascript
var sounds = ['file1.mp3', 'file2.mp3', 'file3.mp3'];

-jsPsych.pluginAPI.preloadAudioFiles(sounds, function(){ startExperiment(); });
+jsPsych.pluginAPI.preloadAudio(sounds,
+  function(){ startExperiment(); },
+  function(file){ console.log('file loaded: ', file); },
+  function(file){ console.log('error loading file: ', file); }
+);

function startExperiment(){
  jsPsych.init({
    timeline: exp
  });
}
-
```

#### Show progress of loading

```javascript
var sounds = ['file1.mp3', 'file2.mp3', 'file3.mp3'];
+var n_loaded = 0;

-jsPsych.pluginAPI.preloadAudioFiles(sounds, function(){ startExperiment(); }, function(nLoaded) { updateLoadedCount(nLoaded); });
+jsPsych.pluginAPI.preloadAudio(sounds, function(){ startExperiment(); }, function(file) { updateLoadedCount(file); });

-function updateLoadedCount(nLoaded){
-  var percentcomplete = nLoaded / sounds.length * 100;
+function updateLoadedCount(file){
+  n_loaded++;
+  var percentcomplete = n_loaded / sounds.length * 100;

  // could put something fancier here, like a progress bar
  // or updating text in the DOM.
@@ -372,18 +408,18 @@ function updateLoadedCount(nLoaded){ } function startExperiment(){ - jsPsych.init({ - timeline: exp - }); + jsPsych.init({ + timeline: exp + }); } ``` - --- + ## jsPsych.pluginAPI.preloadImages -``` -jsPsych.pluginAPI.preloadImages(images, callback_complete, callback_load) +```javascript +jsPsych.pluginAPI.preloadImages(images, callback_complete, callback_load, callback_error) ``` ### Parameters @@ -392,7 +428,8 @@ Parameter | Type | Description ----------|------|------------ images | array | An array of image paths to load. The array can be nested (e.g., if images are in multiple arrays to help sort by condition or task). callback_complete | function | A function to execute when all the images have been loaded. -callback_load | function | A function to execute after each image has been loaded. A single parameter is passed to this function which contains the number of images that have been loaded so far. +callback_load | function | A function to execute after a single file has been loaded. A single parameter is passed to this function which is the file source (string) that has loaded. +callback_error | function | A function to execute after a single file has produced an error. A single parameter is passed to this function which is the file source (string) that produced the error. ### Return value @@ -400,38 +437,43 @@ Returns nothing. ### Description -Use this function to preload image files that are not part of a plugin with automatic preloading. Image files in official plugins will automatically preload. See [Media Preloading](../overview/media-preloading.md) for more information. +This function is used to preload image files. It is used by the `preload` plugin, and could be called directly to preload image files in custom plugins or experiment code. See [Media Preloading](../overview/media-preloading.md) for more information. It is possible to run this function without specifying a callback function. 
However, in this case the code will continue executing while the images are loaded. Thus, it is possible that an image would be required for display before it is done preloading. The `callback_complete` function will only execute after all the images are loaded, and can be used to control the flow of the experiment (e.g., by starting the experiment in the `callback_complete` function).

-The `callback_load` function can be used to indicate progress, if the number of images to be loaded is known ahead of time. See example below.
+The `callback_load` and `callback_error` functions are called after each file has either loaded or produced an error, so these functions can also be used to monitor loading progress. See example below.

### Examples

#### Basic use

-```javascript
+```javascript
var images = ['img/file1.png', 'img/file2.png', 'img/file3.png'];

-jsPsych.pluginAPI.preloadImages(images, function(){ startExperiment(); });
+jsPsych.pluginAPI.preloadImages(images,
+  function(){ startExperiment(); },
+  function(file){ console.log('file loaded: ', file); },
+  function(file){ console.log('error loading file: ', file); }
+);

function startExperiment(){
  jsPsych.init({
    timeline: exp
  });
}
-
```

#### Show progress of loading

```javascript
var images = ['img/file1.png', 'img/file2.png', 'img/file3.png'];
+var n_loaded = 0;

-jsPsych.pluginAPI.preloadImages(images, function(){ startExperiment(); }, function(nLoaded) { updateLoadedCount(nLoaded); });
+jsPsych.pluginAPI.preloadImages(images, function(){ startExperiment(); }, function(file) { updateLoadedCount(file); });

-function updateLoadedCount(nLoaded){
-  var percentcomplete = nLoaded / images.length * 100;
+function updateLoadedCount(file){
+  n_loaded++;
+  var percentcomplete = n_loaded / images.length * 100;

  // could put something fancier here, like a progress bar
  // or updating text in the DOM.
@@ -439,17 +481,91 @@ function updateLoadedCount(nLoaded){ } function startExperiment(){ - jsPsych.init({ - timeline: exp - }); + jsPsych.init({ + timeline: exp + }); } ``` --- -## jsPsych.pluginAPI.registerPreload +## jsPsych.pluginAPI.preloadVideo + +```javascript +jsPsych.pluginAPI.preloadVideo(video, callback_complete, callback_load, callback_error) ``` -jsPsych.pluginAPI.registerPreload(plugin_name, parameter, media_type, conditional_function) + +### Parameters + +Parameter | Type | Description +----------|------|------------ +video | array | An array of video paths to load. The array can be nested (e.g., if videos are in multiple arrays to help sort by condition or task). +callback_complete | function | A function to execute when all the videos have been loaded. +callback_load | function | A function to execute after a single file has been loaded. A single parameter is passed to this function which is the file source (string) that has loaded. +callback_error | function | A function to execute after a single file has produced an error. A single parameter is passed to this function which is the file source (string) that produced the error. + +### Return value + +Returns nothing. + +### Description + +This function is used to preload video files. It is used by the `preload` plugin, and could be called directly to preload video files in custom plugins or experiment code. See [Media Preloading](../overview/media-preloading.md) for more information. + +It is possible to run this function without specifying a callback function. However, in this case the code will continue executing while the videos are loaded. Thus, it is possible that a video would be requested before it is done preloading. The `callback_complete` function will only execute after all the videos are loaded, and can be used to control the flow of the experiment (e.g., by starting the experiment in the `callback_complete` function). 
+
+The `callback_load` and `callback_error` functions are called after each file has either loaded or produced an error, so these functions can also be used to monitor loading progress. See example below.
+
+### Examples
+
+#### Basic use
+
+```javascript
+var videos = ['vid/file1.mp4', 'vid/file2.mp4', 'vid/file3.mp4'];
+
+jsPsych.pluginAPI.preloadVideo(videos,
+  function(){ startExperiment(); },
+  function(file){ console.log('file loaded: ', file); },
+  function(file){ console.log('error loading file: ', file); }
+);
+
+function startExperiment(){
+  jsPsych.init({
+    timeline: exp
+  });
+}
+```
+
+#### Show progress of loading
+
+```javascript
+var videos = ['vid/file1.mp4', 'vid/file2.mp4', 'vid/file3.mp4'];
+var n_loaded = 0;
+
+jsPsych.pluginAPI.preloadVideo(videos, function(){ startExperiment(); }, function(file) { updateLoadedCount(file); });
+
+function updateLoadedCount(file){
+  n_loaded++;
+  var percentcomplete = n_loaded / videos.length * 100;
+
+  // could put something fancier here, like a progress bar
+  // or updating text in the DOM.
+  console.log('Loaded '+percentcomplete+'% of videos');
+}
+
+function startExperiment(){
+  jsPsych.init({
+    timeline: exp
+  });
+}
+```
+
+---
+
+## jsPsych.pluginAPI.registerPreload
+
+```javascript
+jsPsych.pluginAPI.registerPreload(plugin_name, parameter, media_type)
```

### Parameters

@@ -458,8 +574,7 @@ Parameter | Type | Description
----------|------|------------
plugin_name | string | The name of the plugin. e.g., 'image-keyboard-response'.
parameter | string | The name of the parameter that is a media file. e.g., 'stimulus'
-media_type | string | The type of media, either 'image' or 'audio'.
-conditional_function | function | Only run the preload for a trial if this function returns true, or if this function does not exist.
+media_type | string | The type of media, either 'image', 'audio', or 'video'.

### Return value

@@ -469,16 +584,15 @@ Nothing.
Use this method in a plugin file to mark a parameter as containing an element that should be preloaded. The method should be called in the plugin file such that it gets called when the file is loaded. -The `conditional_function` function is passed a single argument containing the trial object. - ### Example -For an example, see the [image-keyboard-response](https://github.com/jodeleeuw/jsPsych/blob/master/plugins/jspsych-image-keyboard-response.js) and [audio-keyboard-response](https://github.com/jodeleeuw/jsPsych/blob/master/plugins/jspsych-audio-keyboard-response.js) plugins. +For an example, see the [image-keyboard-response](https://github.com/jspsych/jsPsych/blob/master/plugins/jspsych-image-keyboard-response.js) and [audio-keyboard-response](https://github.com/jspsych/jsPsych/blob/master/plugins/jspsych-audio-keyboard-response.js) plugins. --- + ## jsPsych.pluginAPI.setTimeout -``` +```javascript jsPsych.pluginAPI.setTimeout(callback, delay) ``` @@ -497,7 +611,7 @@ Returns the ID of the setTimeout handle. This is simply a call to the standard setTimeout function in JavaScript with the added benefit of registering the setTimeout call in a central list. This is useful for scenarios where some other event (the trial ending, aborting the experiment) should stop the execution of queued timeouts. 
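
The central-list bookkeeping described above can be sketched in plain JavaScript; this is a simplified illustration of the pattern, not jsPsych's actual implementation:

```javascript
// Keep every timeout handle in a central list so that all pending
// timeouts can be cancelled at once (e.g., when a trial ends early).
var timeout_handles = [];

function registerTimeout(callback, delay) {
  var handle = setTimeout(callback, delay);
  timeout_handles.push(handle); // remember the handle for later cleanup
  return handle;
}

function clearAllTimeouts() {
  for (var i = 0; i < timeout_handles.length; i++) {
    clearTimeout(timeout_handles[i]);
  }
  timeout_handles = [];
}

// queue a timeout, then cancel everything before it can fire
registerTimeout(function () { console.log('never runs'); }, 1000);
clearAllTimeouts();
```

jsPsych itself exposes this cleanup through `jsPsych.pluginAPI.clearAllTimeouts()`, which plugins can call when a trial ends.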
-### Examples +### Example ```javascript // print the time diff --git a/docs/core_library/jspsych-randomization.md b/docs/core_library/jspsych-randomization.md index a78258a4a6..cfbd6dd6e2 100644 --- a/docs/core_library/jspsych-randomization.md +++ b/docs/core_library/jspsych-randomization.md @@ -6,17 +6,17 @@ The jsPsych.randomization module contains methods that are useful for generating ## jsPsych.randomization.factorial -``` +```javascript jsPsych.randomization.factorial(factors, repetitions, unpack) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -factors | object | The `factors` object should contain a property for each different factor. Each property-factor should have a value of an array, with each element of the array corresponding to a level of the factor. -repetitions | integer | The number of times to repeat each unique combination of the factors in the output sample. -unpack | boolean | If `true` then the output will be an object with a property for each factor in the original `factors` object. The value of each property-factor will be an array containing the levels of the factor in a random order. The order will be consistent across each property-factor (e.g., the first element of each property-factor will specify one unique combination of the factors). If `false`, then the return value will be an array of objects where each property-factor contains only a single value. +| Parameter | Type | Description | +| ----------- | ------- | ---------------------------------------- | +| factors | object | The `factors` object should contain a property for each different factor. Each property-factor should have a value of an array, with each element of the array corresponding to a level of the factor. | +| repetitions | integer | The number of times to repeat each unique combination of the factors in the output sample. 
| +| unpack | boolean | If `true` then the output will be an object with a property for each factor in the original `factors` object. The value of each property-factor will be an array containing the levels of the factor in a random order. The order will be consistent across each property-factor (e.g., the first element of each property-factor will specify one unique combination of the factors). If `false`, then the return value will be an array of objects where each property-factor contains only a single value. | ### Return value @@ -29,6 +29,7 @@ This method takes a list of factors and their levels, and creates a full factori ### Examples #### Create full factorial design + ```javascript var factors = { stimulus: ['a.jpg', 'b.jpg'], @@ -49,6 +50,7 @@ full_design = [ ``` #### Create full factorial design with repeats + ```javascript var factors = { stimulus: ['a.jpg', 'b.jpg'], @@ -73,6 +75,7 @@ full_design = [ ``` #### Create full factorial design, unpacked + ```javascript var factors = { stimulus: ['a.jpg', 'b.jpg'], @@ -91,17 +94,18 @@ full_design = { ``` --- + ## jsPsych.randomization.randomID -``` +```javascript jsPsych.randomization.randomID(length) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -length | integer | The length of the randomly generated ID +| Parameter | Type | Description | +| --------- | ------- | --------------------------------------- | +| length | integer | The length of the randomly generated ID | ### Return value @@ -114,29 +118,28 @@ Generates a random string that is likely to be unique. 
If length is undefined, t ### Example ```javascript - console.log(jsPsych.randomization.randomID()); // outputs: "t7dwz0e713pc8juuaayyfvpkdd9un239" console.log(jsPsych.randomization.randomID(8)); // outputs: "3xtpcbck" - ``` --- + ## jsPsych.randomization.repeat -``` +```javascript jsPsych.randomization.repeat(array, repetitions, unpack) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -array | array | The array of values to randomize & repeat. -repetitions | integer or array | The number of times to repeat each element of the `array` in the final sample. If this parameter is defined as an integer, then each element of `array` is repeated the same number of times. This parameter can also be an array of the same length as `array`, in which case each element of `array` will be repeated the number of times defined in the corresponding position of the `repetitions` array. -unpack | boolean | If each element of `array` is an object with an equivalent set of properties, then setting `unpack` to `true` will make the return value an object with a property for each of the unique properties among the elements of the `array`. Each property in the output object will be an array containing the values for that property in the randomized order. The order will be consistent across properties. If this is `false` then the output is just an array containing a randomized order of the original `array` elements. +| Parameter | Type | Description | +| ----------- | ---------------- | ---------------------------------------- | +| array | array | The array of values to randomize & repeat. | +| repetitions | integer or array | The number of times to repeat each element of the `array` in the final sample. If this parameter is defined as an integer, then each element of `array` is repeated the same number of times. 
This parameter can also be an array of the same length as `array`, in which case each element of `array` will be repeated the number of times defined in the corresponding position of the `repetitions` array. | +| unpack | boolean | If each element of `array` is an object with an equivalent set of properties, then setting `unpack` to `true` will make the return value an object with a property for each of the unique properties among the elements of the `array`. Each property in the output object will be an array containing the values for that property in the randomized order. The order will be consistent across properties. If this is `false` then the output is just an array containing a randomized order of the original `array` elements. | ### Return value @@ -153,42 +156,37 @@ If the array elements are objects with the same set of properties, then this met #### Shuffle an array, no repeats ```javascript - var myArray = [1,2,3,4,5]; var shuffledArray = jsPsych.randomization.repeat(myArray, 1); - // output: shuffledArray = [3,2,4,1,5] ``` #### Shuffle an array with repeats ```javascript - var myArray = [1,2,3,4,5]; var shuffledArray = jsPsych.randomization.repeat(myArray, 2); - // output: shuffledArray = [1,3,4,2,2,4,5,1,5,3] ``` #### Shuffle an array of objects ```javascript - var trial1 = { stimulus: 'img/faceA.jpg', - correct_key: 80, + correct_key: 'p', person_name: 'Joe' } var trial2 = { stimulus: 'img/faceB.jpg', - correct_key: 80, + correct_key: 'p', person_name: 'Fred' } var trial3 = { stimulus: 'img/faceC.jpg', - correct_key: 81, + correct_key: 'q', person_name: 'Mary' } @@ -201,22 +199,21 @@ var shuffledArray = jsPsych.randomization.repeat(myArray, 2); #### Shuffle an array of objects, with unpack ```javascript - var trial1 = { stimulus: 'img/faceA.jpg', - correct_key: 80, + correct_key: 'p', person_name: 'Joe' } var trial2 = { stimulus: 'img/faceB.jpg', - correct_key: 80, + correct_key: 'p', person_name: 'Fred' } var trial3 = { stimulus: 'img/faceC.jpg', - 
correct_key: 81, + correct_key: 'q', person_name: 'Mary' } @@ -226,25 +223,27 @@ var shuffledArray = jsPsych.randomization.repeat(myArray, 2, true); /* output: shuffledArray = { stimulus: ['img/faceB.jpg','img/faceA.jpg','img/faceC.jpg','img/faceA.jpg','img/faceC.jpg','img/faceB.jpg'], - correct_key: [80, 80, 81, 80, 81, 80], - person_name: ['Fred','Joe', 'Mary', 'Joe', 'Mary', 'Fred'] + correct_key: ['p', 'p', 'q', 'p', 'q', 'p'], + person_name: ['Fred', 'Joe', 'Mary', 'Joe', 'Mary', 'Fred'] } */ ``` + --- + ## jsPsych.randomization.sampleWithReplacement -``` +```javascript jsPsych.randomization.sampleWithReplacement(array, sampleSize, weights) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -array | array | The array of values to sample from -sampleSize | numeric | The number of samples to draw -weights | array | The relative weight of each element in `array`. This array is normalized, so the values do not need to sum to 1. The length must match the length of `array`. +| Parameter | Type | Description | +| ---------- | ------- | ---------------------------------------- | +| array | array | The array of values to sample from | +| sampleSize | numeric | The number of samples to draw | +| weights | array | The relative weight of each element in `array`. This array is normalized, so the values do not need to sum to 1. The length must match the length of `array`. 
| ### Return value @@ -259,36 +258,33 @@ This method returns a sample drawn at random from a set of values with replaceme #### Sample with equal probability ```javascript - var myArray = [1,2,3,4,5]; var sample = jsPsych.randomization.sampleWithReplacement(myArray, 10); - // output: sample = [3, 1, 2, 2, 5, 1, 4, 3, 1, 5]; ``` #### Sample with unequal probability ```javascript - var myArray = [1,2,3,4,5]; var sample = jsPsych.randomization.sampleWithReplacement(myArray, 10, [6,1,1,1,1]); - // output: sample = [3, 4, 5, 1, 2, 1, 3, 1, 1, 1]; ``` --- + ## jsPsych.randomization.sampleWithoutReplacement -``` +```javascript jsPsych.randomization.sampleWithoutReplacement(array, sampleSize) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -array | array | The array of values to sample from -sampleSize | numeric | The number of samples to draw +| Parameter | Type | Description | +| ---------- | ------- | ---------------------------------- | +| array | array | The array of values to sample from | +| sampleSize | numeric | The number of samples to draw | ### Return value @@ -303,25 +299,24 @@ This method returns a sample drawn at random from a set of values without replac #### Sample without replacement ```javascript - var myArray = [1,2,3,4,5]; var sample = jsPsych.randomization.sampleWithoutReplacement(myArray, 2); - // output: sample = [3,2]; ``` --- + ## jsPsych.randomization.shuffle -``` +```javascript jsPsych.randomization.shuffle(array) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -array | array | The array of values to shuffle +| Parameter | Type | Description | +| --------- | ----- | ------------------------------ | +| array | array | The array of values to shuffle | ### Return value @@ -336,26 +331,25 @@ A simple method for shuffling the order of an array. 
#### Shuffle an array ```javascript - var myArray = [1,2,3,4,5]; var shuffledArray = jsPsych.randomization.shuffle(myArray); - // output: shuffledArray = [3,2,4,1,5] ``` --- + ## jsPsych.randomization.shuffleNoRepeats -``` +```javascript jsPsych.randomization.shuffleNoRepeats(array, equalityTest) ``` ### Parameters -Parameter | Type | Description -----------|------|------------ -array | array | The array of values to shuffle -equalityTest | function | A function to use to evaluate the equality of neighbors in the array. The function should accept two parameters, which are the two elements to be tested. It should return `true` if they are equal and `false` if not. The default function, if none is specified, is to use the `===` operator. This will work for primitive values, but fail for Objects and Arrays. An example function is given below in the examples. +| Parameter | Type | Description | +| ------------ | -------- | ---------------------------------------- | +| array | array | The array of values to shuffle | +| equalityTest | function | A function to use to evaluate the equality of neighbors in the array. The function should accept two parameters, which are the two elements to be tested. It should return `true` if they are equal and `false` if not. The default function, if none is specified, is to use the `===` operator. This will work for primitive values, but fail for Objects and Arrays. An example function is given below in the examples. | ### Return value @@ -372,10 +366,8 @@ Shuffle an array, ensuring that neighboring elements in the array are different. 
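
Because the default `===` test fails for objects, shuffling an array of trial objects needs a custom `equalityTest`. A minimal sketch (the `stimulus` property and trial objects are illustrative):

```javascript
// Treat two trial objects as equal when they share the same stimulus,
// so the shuffle will not place identical stimuli next to each other.
var equalityTest = function(a, b) {
  return a.stimulus === b.stimulus;
};

var myArray = [
  {stimulus: 'a.jpg'}, {stimulus: 'a.jpg'},
  {stimulus: 'b.jpg'}, {stimulus: 'b.jpg'}
];

var shuffledArray = jsPsych.randomization.shuffleNoRepeats(myArray, equalityTest);
// neighboring elements of shuffledArray will have different stimulus values
```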
#### Basic example ```javascript - var myArray = [1,2,3,4,5,1,2,3,4,5,1,2,3,4,5]; var shuffledArray = jsPsych.randomization.shuffleNoRepeats(myArray); - // output: shuffledArray = [2, 3, 5, 1, 2, 4, 1, 5, 4, 1, 3, 5, 4, 3, 2] ``` diff --git a/docs/core_library/jspsych-turk.md b/docs/core_library/jspsych-turk.md index a4bb9f60b3..079de77776 100644 --- a/docs/core_library/jspsych-turk.md +++ b/docs/core_library/jspsych-turk.md @@ -3,9 +3,10 @@ The jsPsych.turk module contains functions for interacting with Mechanical Turk. --- + ## jsPsych.turk.submitToTurk -``` +```javascript jsPsych.turk.submitToTurk(data) ``` @@ -13,7 +14,7 @@ jsPsych.turk.submitToTurk(data) Parameter | Type | Description ----------|------|------------ -data | object | The `data` parameter is an object of `key: value` pairs. Any pairs in the `data` parameter will be saved by Mechanical Turk, and can be downloaded in a CSV file through the Mechanical Turk interface. +data | object | The `data` parameter is an object of `key: value` pairs. Any pairs in the `data` parameter will be saved by Mechanical Turk, and can be downloaded in a CSV file through the Mechanical Turk interface. **Important**: the `data` parameter must contain at least one `key: value` pair, even just a dummy value, or the HIT will not be submitted correctly. ### Return value @@ -28,7 +29,6 @@ This method will only work when called from within the mechanical turk website. ### Example ```html -

Enter the code you were given:

@@ -52,7 +52,7 @@ function sendData() {

## jsPsych.turk.turkInfo

-```
+```javascript
jsPsych.turk.turkInfo()
```

@@ -78,7 +78,6 @@ This method returns basic information about the current Mechanical Turk session,

### Example

```javascript
-
var turkInfo = jsPsych.turk.turkInfo();

alert('Worker ID is: ' + turkInfo.workerId);

@@ -97,6 +96,3 @@ alert('Preview mode? ' + turkInfo.previewMode);

// false otherwise.
alert('Outside turk? ' + turkInfo.outsideTurk);
```
-
-
-
diff --git a/docs/extensions/extensions.md b/docs/extensions/extensions.md
new file mode 100644
index 0000000000..a4d4fc5a5c
--- /dev/null
+++ b/docs/extensions/extensions.md
@@ -0,0 +1,83 @@
+# Extensions
+
+Extensions are jsPsych modules that can interface with any plugin to extend the functionality of the plugin. A canonical example of an extension is eye tracking. An eye tracking extension allows a plugin to gather gaze data and add it to the plugin's data object.
+
+## Using an Extension
+
+To use an extension in an experiment, you'll load the extension file via a `<script>` tag and initialize the extension in the `extensions` parameter of `jsPsych.init()`:
+
+```html
+<script src="jspsych/jspsych.js"></script>
+<script src="jspsych/extensions/some-extension.js"></script>
+```
+
+```js
+jsPsych.init({
+  timeline: [...],
+  extensions: [
+    {type: 'some-extension', params: {...} }
+  ]
+})
+```
+
+To enable an extension during a trial, add the extension to the `extensions` list for the trial. Some extensions may also support or require an object of parameters to configure the extension:
+
+```js
+var trial = {
+  extensions: [
+    {type: 'some-extension', params: {...} }
+  ]
+}
+```
+
+## List of Extensions
+
+Extension | Description
+------ | -----------
+[jspsych-ext-webgazer.js](../extensions/jspsych-ext-webgazer.md) | Enables eye tracking using the [WebGazer](https://webgazer.cs.brown.edu/) library.
+
+## Writing an Extension
+
+To create a new extension you must create an object that supports a few event callbacks.
A barebones extension file looks like this: + +```js +jsPsych.extensions['new-extension'] = (function () { + + var extension = {}; + + extension.initialize = function(params){ + // params are passed from the extensions parameter in jsPsych.init + // initialize must return a Promise that resolves when setup is complete + return Promise.resolve(); + } + + extension.on_start = function(params){ + // params are passed from the extensions parameter in the trial object + } + + extension.on_load = function(params){ + // params are passed from the extensions parameter in the trial object + } + + extension.on_finish = function(params){ + // params are passed from the extensions parameter in the trial object + return { + // any data that the extension returns here will be added to the trial data + } + } + + return extension; +})(); +``` + +The four events that an extension must support are shown in the sample code. + +`extension.initialize` is called when `jsPsych.init()` is called. This is where setup code for the extension can happen. This event will happen once per experiment, unlike the other events which occur with each trial. The `params` object can include whatever parameters are necessary to configure the extension. The `params` object is passed from the call to `jsPsych.init()` to the `extension.initialize` method. `extension.initialize` must return a `Promise` that resolves when the extension is finished initializing. + +`extension.on_start` is called at the start of the plugin execution, prior to calling `plugin.trial`. This is where trial-specific initialization can happen, such as creating empty containers to hold data or resetting internal state. The `params` object is passed from the declaration of the extension in the trial object. You can use `params` to customize the behavior of the extension for each trial. + +`extension.on_load` is called after `plugin.trial` has executed, which is typically when the plugin has finished executing initial DOM-modifying code and has set up various event listeners.
This is where the extension can begin actively interacting with the DOM and recording data. The `params` object is passed from the declaration of the extension in the trial object. You can use `params` to customize the behavior of the extension for each trial. + +`extension.on_finish` is called after the plugin completes. This can be used for any teardown at the end of the trial. This method should return an object of data to append to the plugin's data. Note that this event fires *before* the `on_finish` event for the plugin, so data added by the extension is accessible in any trial `on_finish` event handlers. The `params` object is passed from the declaration of the extension in the trial object. You can use `params` to customize the behavior of the extension for each trial. + +The extension can also include any additional methods that are necessary for interacting with it. See the [webgazer extension](../extensions/jspsych-ext-webgazer.md) for an example. \ No newline at end of file diff --git a/docs/extensions/jspsych-ext-webgazer.md b/docs/extensions/jspsych-ext-webgazer.md new file mode 100644 index 0000000000..095fae4a4b --- /dev/null +++ b/docs/extensions/jspsych-ext-webgazer.md @@ -0,0 +1,137 @@ +# jspsych-ext-webgazer + +This extension supports eye tracking through the [WebGazer](https://webgazer.cs.brown.edu/) library. For a narrative description of how to use this extension see the [eye tracking overview](../overview/eye-tracking.md). + +## Parameters + +### Initialization Parameters + +Initialization parameters can be set when calling `jsPsych.init()` + +```js +jsPsych.init({ + extensions: [ + {type: 'webgazer', params: {...}} + ] +}) +``` + +Parameter | Type | Default Value | Description +----------|------|---------------|------------ +webgazer | object | `undefined` | You can explicitly pass a reference to a loaded instance of the webgazer.js library. If no explicit reference is passed then the extension will look for a global `webgazer` object. 
If you are loading webgazer.js via a ` + + +``` + +### Load the jsPsych webgazer extension + +The [webgazer extension](/extensions/jspsych-ext-webgazer.md) adds functionality to jsPsych for interacting with webgazer. Load it like you would a plugin file. + + +```html + + + + + +``` + +To use the WebGazer extension in an experiment, include it in the list of extensions passed to `jsPsych.init()` + +```js +jsPsych.init({ + timeline: [...], + extensions: [ + {type: 'webgazer'} + ] +}) +``` + + +!!! tip + Example experiments using WebGazer are available in the **/examples** folder of the jsPsych release. See `webgazer.html`, `webgazer_image.html`, and `webgazer_audio.html`. + +### Initialize the camera + +To help the participant position their face correctly for eye tracking you can use the [jspsych-webgazer-init-camera plugin](/plugins/jspsych-webgazer-init-camera.md). This will show the participant what the camera sees, including facial feature landmarks, and prevent the participant from continuing until their face is in a good position for eye tracking. This plugin will also trigger the experiment to request permission to access the user's webcam if it hasn't already been granted. + + +```js +var init_camera_trial = { + type: 'webgazer-init-camera' +} +``` + + +### Calibration + +To calibrate WebGazer, you can use the [jspsych-webgazer-calibrate plugin](/plugins/jspsych-webgazer-calibrate.md). This plugin allows you to specify a set of points on the screen for calibration and to choose the method for calibrating -- either clicking on each point or simply fixating on each point. The location of calibration points is specified in percentages, e.g., `[25,50]` will result in a point that is 25% of the width of the screen from the left edge and 50% of the height of the screen from the top edge. Options for controlling other details of the calibration are explained in the [documentation for the plugin](/plugins/jspsych-webgazer-calibrate.md).
+ +Note that instructions are not included in the calibration plugin, so you'll likely want to use a different plugin (e.g., `html-button-response`) to display instructions prior to running the calibration. + +```js +var calibration_trial = { + type: 'webgazer-calibrate', + calibration_points: [[25,50], [50,50], [75,50], [50,25], [50,75]], + calibration_mode: 'click' +} +``` + + +### Validation + +To measure the accuracy and precision of the calibration, you can use the [jspsych-webgazer-validate plugin](/plugins/jspsych-webgazer-validate.md). Like the calibration plugin, you can specify a list of points to perform validation on. Here you can specify the points as either percentages or in terms of the distance from the center of the screen in pixels. Which mode you use will probably depend on how you are defining your stimuli throughout the experiment. You can also specify the radius of tolerance around each point, and the plugin will calculate the percentage of measured gaze samples within that radius. This is a potentially useful heuristic for deciding whether or not to calibrate again. Options for controlling other details of the validation are explained in the [documentation for the plugin](/plugins/jspsych-webgazer-validate.md). + + +```js +var validation_trial = { + type: 'webgazer-validate', + validation_points: [[-200,200], [200,200],[-200,-200],[200,-200]], + validation_point_coordinates: 'center-offset-pixels', + roi_radius: 100 +} +``` + +The validation procedure stores the raw gaze data for each validation point, the computed average offset from each validation point, the percentage of samples within the `roi_radius` for each validation point, and the number of samples collected per second. + +```js +{ + raw_gaze: [...], + percent_in_roi: [...], + average_offset: [...], + samples_per_sec: ... +} +``` + +We recommend performing calibration and validation periodically throughout your experiment.
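A hedged sketch of one way to do this is a looping timeline node whose `loop_function` repeats calibration and validation whenever validation accuracy is too low. The `needsRecalibration` helper and the 50% cutoff are illustrative assumptions, not part of jsPsych; the commented-out timeline entries stand for the `calibration_trial` and `validation_trial` objects defined above.

```js
// Sketch: repeat calibration + validation until accuracy is acceptable.
// needsRecalibration is a hypothetical helper, not part of jsPsych.
function needsRecalibration(percent_in_roi, threshold) {
  // true if any validation point had too few gaze samples inside its ROI
  return percent_in_roi.some(function (p) { return p < threshold; });
}

var recalibration_loop = {
  timeline: [/* calibration_trial, validation_trial (defined above) */],
  loop_function: function (data) {
    // the validation trial is the last trial in this timeline node
    var validation = data.values()[data.values().length - 1];
    // 50 is an arbitrary example cutoff; tune it for your experiment
    return needsRecalibration(validation.percent_in_roi, 50);
  }
};
```

Placing a node like `recalibration_loop` on the main timeline at block boundaries keeps the calibration fresh without interrupting individual trials.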
+ +### Adding eye tracking to a trial + +To enable eye tracking for a trial in your experiment, you can simply add the WebGazer extension to the trial. + +```js +var trial = { + type: 'html-keyboard-response', + stimulus: '', + extensions: [ + { + type: 'webgazer', + params: { + targets: ['#scene'] + } + } + ] +} +``` + +This will turn on WebGazer at the start of the trial. + +The `params` property in the `extensions` declaration allows you to pass in a list of [CSS selector strings](https://www.w3schools.com/cssref/css_selectors.asp). The [bounding rectangle](https://developer.mozilla.org/en-US/docs/Web/API/Element/getBoundingClientRect) of the DOM element that matches each selector will be recorded in the data for that trial. This allows for easy alignment of the gaze data and objects on the screen. + +```js +webgazer_targets : { + 'selector': {x: ..., y: ..., height: ..., width: ..., top: ..., left: ..., right: ..., bottom:...} + 'selector': {x: ..., y: ..., height: ..., width: ..., top: ..., left: ..., right: ..., bottom:...} +} +``` + +Gaze data will be added to the trial's data under the property `webgazer_data`. The gaze data is an array of objects. Each object has an `x`, a `y`, and a `t` property. The `x` and `y` properties specify the gaze location in pixels and `t` specifies the time in milliseconds since the start of the trial. Note that establishing the precision and accuracy of these measurements across the variety of web browsers and systems that your experiment participants might be using is quite difficult. For example, different browsers may cause small systematic shifts in the accuracy of `t` values. + +```js +webgazer_data: [ + {x: ..., y: ..., t: ...}, + {x: ..., y: ..., t: ...}, + {x: ..., y: ..., t: ...}, + {x: ..., y: ..., t: ...} +] +``` + +## Tips for Improving Data Quality + +These are some anecdotal observations about factors that improve data quality. + +1. The quality of the camera feed is essential. 
Good lighting makes a big difference. You may want to encourage participants to perform any eye tracking experiments in a well-lit room. +2. Participants need to keep their head relatively still during and after calibration. The calibration is not robust to head movements. +3. WebGazer's click-based calibration can be used throughout the experiment. You can turn this on by calling `jsPsych.extensions.webgazer.startMouseCalibration()` at any point in the experiment. If you use a continue button to advance through the experiment and move the location of the continue button around you can be making small adjustments to the calibration throughout. +4. Computing the gaze predictions consumes more computational resources than most other things that jsPsych is typically used for. The sampling rate that WebGazer is able to achieve will depend on the computing power of the participant's device. You may want to ask the participant to close any non-essential software and browser windows prior to completing the experiment. You may also want to check that the sampling rate is sufficiently high as part of validation. + +If you have tips based on your own experience please consider sharing them on our [discussion forum](https://github.com/jspsych/jsPsych/discussions) and we'll add to this list! + +## Example + +The code below shows a basic example of what it looks like when you put all of these things together in your experiment's HTML file. + +```html + + + + + + + + + + + + + + + + +``` + +Below is example data from the image-keyboard-response trial taken from the experiment above. In addition to the standard data that is collected for this plugin, you can see the additional `webgazer_data` and `webgazer_targets` arrays. The `webgazer_data` shows 21 gaze location estimates during the 1-second image presentation. 
The `webgazer_targets` array shows that there was one target, the image-keyboard-response stimulus, and tells you the x- and y-coordinate boundaries for the target (image) rectangle. By comparing each of the x/y locations from the `webgazer_data` locations array with the target boundaries in `webgazer_targets`, you can determine if/when the estimated gaze location was inside the target area. + +```js +{ + "rt": null, + "stimulus": "img/blue.png", + "response": null, + "trial_type": "image-keyboard-response", + "trial_index": 4, + "time_elapsed": 30701, + "internal_node_id": "0.0-4.0", + "webgazer_data": [ + { "x": 1065, "y": 437, "t": 39}, + { "x": 943, "y": 377, "t": 79}, + { "x": 835, "y": 332, "t": 110}, + { "x": 731, "y": 299, "t": 146}, + { "x": 660, "y": 271, "t": 189}, + { "x": 606, "y": 251, "t": 238}, + { "x": 582, "y": 213, "t": 288}, + { "x": 551, "y": 200, "t": 335}, + { "x": 538, "y": 183, "t": 394}, + { "x": 514, "y": 177, "t": 436}, + { "x": 500, "y": 171, "t": 493}, + { "x": 525, "y": 178, "t": 542}, + { "x": 537, "y": 182, "t": 592}, + { "x": 543, "y": 178, "t": 633}, + { "x": 547, "y": 177, "t": 691}, + { "x": 558, "y": 174, "t": 739}, + { "x": 574, "y": 183, "t": 789}, + { "x": 577, "y": 197, "t": 838}, + { "x": 584, "y": 214, "t": 889}, + { "x": 603, "y": 218, "t": 937}, + { "x": 606, "y": 221, "t": 987} + ], + "webgazer_targets": { + "#jspsych-image-keyboard-response-stimulus": { + "x": 490, + "y": 135, + "height": 300, + "width": 300, + "top": 135, + "bottom": 435, + "left": 490, + "right": 790 + } + } +} +``` \ No newline at end of file diff --git a/docs/overview/media-preloading.md b/docs/overview/media-preloading.md index d0cba383b8..ce8ea51dc5 100644 --- a/docs/overview/media-preloading.md +++ b/docs/overview/media-preloading.md @@ -1,60 +1,72 @@ # Media Preloading -If an experiment uses image, audio, or video files as stimuli, it is a good idea to preload the files before running the experiment.
Preloading files means that the subject's browser will download all of the files and store them in local memory on the subject's computer. This is important because loading a file is much faster if it is already in memory on the subject's computer. Without preloading, there will be noticeable delays in the display of media, which will affect any timing measurements (such as how long an image is displayed, or a subject's response time since first viewing an image). For particularly large files, like video, preloading content avoids lengthy pauses in the middle of the experiment that can be disruptive to the flow of the experiment. +If an experiment uses image, audio, or video files as stimuli, it is a good idea to preload the files before running the experiment. You can preload files at any point in your experiment using the [jsPsych `preload` plugin](../plugins/jspsych-preload.md). Preloading files means that the subject's browser will download the files and store them in local memory on the subject's computer. This is important because displaying or playing a media file is much faster if it is already in memory on the subject's computer. Without preloading, there will be noticeable delays in the display of media, which will affect any timing measurements (such as how long an image is displayed, or a subject's response time since first viewing an image). For particularly large files, like video, preloading content avoids lengthy pauses in the middle of the experiment that can be disruptive to the flow of the experiment. + +!!! warning + Note that video preloading will not work when you run your experiment offline (e.g., by double-clicking on the HTML file), but it will work once your experiment is running online (hosted on a server). The [Cross-origin requests (CORS) and safe mode](running-experiments.md#cross-origin-requests-cors-and-safe-mode) section on the Running Experiments page contains more information about this. 
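As a rough development-time guard (an illustrative sketch, not a jsPsych feature), you can detect the offline case by checking the page's protocol before initializing the experiment:

```javascript
// Hypothetical helper: true when the page was opened directly from disk,
// which is the situation where video preloading will not work
function isRunningFromDisk(protocol) {
  return protocol === 'file:';
}

// In the browser:
// if (isRunningFromDisk(window.location.protocol)) {
//   console.warn('Running offline; host the experiment on a server to test video preloading.');
// }
```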
## Automatic Preloading -jsPsych will automatically preload audio, video, and image files that are used as parameters for the standard set of plugins. +jsPsych can automatically preload audio, video, and image files that are used as parameters for the standard set of plugins, based on the timeline that is passed to `jsPsych.init`. You must initiate this preloading using a `preload` trial. You should add this `preload` trial into your timeline when you want the preloading to occur, and set the `auto_preload` parameter to `true`. ```javascript -// the image file img/file1.png is -// automatically preloaded before the experiment begins -var trial = { +// the "auto_preload: true" setting tells the plugin to automatically find +// stimuli to preload based on the main experiment timeline (used in jsPsych.init) +var preload = { + type: 'preload', + auto_preload: true +} + +// this image file can be automatically preloaded +var image_trial = { type: 'image-keyboard-response', stimulus: 'img/file1.png' } -// the sound file is also preloaded automatically +// the sound file can be automatically preloaded var sound_trial = { type: 'audio-keyboard-response', stimulus: 'audio/hello.mp3' } -// the video file is preloaded automatically +// the video file can be automatically preloaded (as long as the experiment +// is running on a server) var video_trial = { type: 'video', - sources: ['video/sample_video.mp4'] + stimulus: ['video/sample_video.mp4'] } jsPsych.init({ - timeline: [trial] + timeline: [preload, image_trial, sound_trial, video_trial] }); ``` ## Manual preloading -If you are using media files in your experiment but they are not being passed directly as parameters to the trials (e.g., because you are using functions as parameters that return the audio, video, or images, or you are using timeline variables), then you can manually specify the files to preload.
- -You can specify an array of image files (`preload_images`) and an array of audio files (`preload_audio`) for preloading in the `jsPsych.init()` method. These files will load before the experiment starts. +If you are using media files in your experiment but they are not being passed directly as parameters to the trials (e.g., because you are using functions as parameters that return the media files, you are using timeline variables, or you are embedding the media files in an HTML string), then these files will not be detected when you use the `auto_preload` option, so you must manually specify them. The `preload` plugin allows you to add these files using the `images`, `audio` and `video` parameters. ```javascript -// this trial will not preload the images, because the image file is being used -// in an HTML string -var trial = { +// this image file cannot be automatically preloaded because it is embedded in +// an HTML string +var image_trial = { type: 'html-keyboard-response', stimulus: '<img src="img/file1.png">', } -var audio_trial = { +// this audio file cannot be automatically preloaded because it is returned +// from a function +var sound_trial = { type: 'audio-keyboard-response', - stimulus: function() { return 'audio/foo.mp3' } + stimulus: function() { return 'audio/sound1.mp3' } } -var video_timline = { +// these video files cannot be automatically preloaded because they are passed +// into a trial using the jsPsych.timelineVariable function +var video_trials = { timeline: [ { type: 'video', - sources: jsPsych.timelineVariable('video') + stimulus: jsPsych.timelineVariable('video') } ], timeline_variables: [ @@ -63,27 +75,295 @@ var video_timline = { ] } -// an array of paths to images that need to be loaded +// to manually preload media files, create an array of file paths for each +// media type var images = ['img/file1.png']; -var audio = ['audio/foo.mp3']; +var audio = ['audio/sound1.mp3']; var video = ['video/1.mp4', 'video/2.mp4']; +// these arrays can be passed into the
preload plugin using the images, audio +// and video parameters +var preload = { + type: 'preload', + images: images, + audio: audio, + video: video +} + jsPsych.init({ - timeline: [trial], - preload_audio: audio, - preload_images: images, - preload_video: video + timeline: [preload, image_trial, sound_trial, video_trials], }); ``` -## Preloading progress bar +## Combining automatic and manual preloading -By default, jsPsych will display a small progress bar while files are being preloaded. This progress bar represents all files that are being automatically preloaded or preloaded from the `preload_audio`, `preload_video`, and `preload_images` arrays. You may wish to turn this off if you are only loading a small number of files, as it will disappear so quickly that the participant may be confused about what it was. You can control whether the preloading progress bar appears by setting the `show_preload_progress_bar` parameter in `jsPsych.init()` +It's possible to combine automatic and manual preloading. For instance, you may want to automatically preload all of the media files based on your experiment timeline, while also manually preloading any files that can't be automatically preloaded. Any duplicate file names across all preloading methods will be removed before preloading starts, so including the same file names in multiple `preload` parameters will not affect the preloading duration. 
```javascript +// this file can be preloaded automatically +var image_trial = { + type: 'image-keyboard-response', + stimulus: 'img/file1.png' +} + +// this file can be preloaded automatically +var sound_trial = { + type: 'audio-keyboard-response', + stimulus: 'audio/hello.mp3' +} + +// these files must be preloaded manually +var video_trials = { + timeline: [ + { + type: 'video', + stimulus: jsPsych.timelineVariable('video') + } + ], + timeline_variables: [ + {video: ['video/1.mp4']}, + {video: ['video/2.mp4']} + ] +} + +var video = ['video/1.mp4', 'video/2.mp4']; + +var preload = { + type: 'preload', + auto_preload: true, // automatically preload the image and audio files + video: video // manually preload the videos used with timeline variables +} + jsPsych.init({ - timeline: timeline, - show_preload_progress_bar: false // hide preload progress bar + timeline: [preload, image_trial, sound_trial, video_trials], }); + ``` + +## Preloading in batches + +Some experiments use many and/or large media files. This can cause problems when participants have slow and/or unreliable internet connections, because it increases the chances of loading errors during preloading. This can also cause problems with file caching, i.e. ensuring that the preloaded files remain in the browser's memory, because loading all stimuli at once may exceed the browser's cache limits. One option for mitigating these problems is to load the media files in smaller batches throughout the experiment. Files should be preloaded as close as possible to when they will be needed. For instance, if you have several blocks of trials, then right before each block, you can preload the stimuli that are needed for that block. + +Here is an example with trials where the stimuli files can be preloaded automatically. In this case, the `trials` parameter is used to tell the `preload` plugin to preload the stimuli from a specific part of the timeline. 
+ +```javascript +// the image files in these trial blocks can be automatically preloaded +var block_1 = { + timeline: [ + { + type: 'image-keyboard-response', + stimulus: 'img/file1.png' + }, + { + type: 'image-keyboard-response', + stimulus: 'img/file2.png' + } + ] +} + +var block_2 = { + timeline: [ + { + type: 'image-keyboard-response', + stimulus: 'img/file3.png' + }, + { + type: 'image-keyboard-response', + stimulus: 'img/file4.png' + } + ] +} + +var preload_1 = { + type: 'preload', + trials: block_1 // automatically preload just the images from block_1 trials +} + +var preload_2 = { + type: 'preload', + trials: block_2 // automatically preload just the images from block_2 trials +} + +jsPsych.init({ + // add each preload trial onto the timeline before the appropriate trial block + timeline: [preload_1, block_1, preload_2, block_2], +}); +``` + +Below is an example with trials where the stimuli files cannot be preloaded automatically, because the stimuli files are passed to the trials via `jsPsych.timelineVariable`. In this case, we create separate arrays for each batch of files, and then pass those arrays to each preload trial.
+ +```javascript +// these trial blocks cannot be automatically preloaded because +// the media files are passed to the trial parameters with timeline variables +var block_1 = { + timeline: [...], + timeline_variables: [ + {stim: 'file1.png'}, + {stim: 'file2.png'} + ] +} + +var block_2 = { + timeline: [...], + timeline_variables: [ + {stim: 'file3.png'}, + {stim: 'file4.png'} + ] +} + +var images_block_1 = ['file1.png', 'file2.png']; +var images_block_2 = ['file3.png', 'file4.png']; + +// preload trial for preloading the block 1 stimuli +var preload_1 = { + type: 'preload', + images: images_block_1 +} + +// preload trial for preloading the block 2 stimuli +var preload_2 = { + type: 'preload', + images: images_block_2 +} + +jsPsych.init({ + // add each preload trial to the timeline before the appropriate trial block + timeline: [preload_1, block_1, preload_2, block_2], +}); + +``` + +## Preloading progress bar + +By default, the `preload` plugin will display a progress bar while files are being preloaded. This progress bar represents all files that are being preloaded during the trial, regardless of whether the file is being preloaded automatically via the `auto_preload` or `trials` parameters, or manually via the `audio`, `images`, and `video` parameters. You may wish to turn the preload progress bar off if you are only loading a small number of files, as it will appear and disappear so quickly that the participant may be confused about what it was. You can control whether the preloading progress bar appears by setting the `show_progress_bar` parameter in the `preload` trial. + +```javascript +var preload_trial = { + type: 'preload', + auto_preload: true, + show_progress_bar: false // hide progress bar +} +``` + +## Loading time limits + +It's usually a good idea to set a time limit for file loading, to ensure that participants aren't waiting for an unreasonable amount of time. Time limits can be specified in milliseconds using the `max_load_time` parameter.
If you set a loading time limit and all files haven't finished loading before this time, then the `preload` trial will either stop with an error (if `continue_after_error` is `false`, the default) or the trial will end and the experiment will continue (if `continue_after_error` is `true`). If `max_load_time` is `null` (the default), then there is no time limit. + +```javascript +var preload_trial = { + type: 'preload', + auto_preload: true, + max_load_time: 60000 // 1 minute +} +``` + +## Loading and error messages + +It's possible to specify custom messages to be shown on the page while the media files are loading, and in case of one or more file loading errors. The `message` parameter allows you to customize the loading message using an HTML-formatted string. If `show_progress_bar` is `true`, then this message will be shown above the progress bar. + +```javascript +var preload_trial = { + type: 'preload', + auto_preload: true, + message: 'Please wait while the experiment loads. This may take a few minutes.', +} +``` + +A preloading error will occur when either (a) one or more files produces a loading error, and/or (b) all files have not finished loading before the `max_load_time` duration. The `error_message` parameter allows you to customize the message that's shown on the page in these cases. This message will only be shown if `continue_after_error` is `false` (the default). + +```javascript +var preload_trial = { + type: 'preload', + auto_preload: true, + error_message: 'The experiment failed to load. Please contact the researcher.' +} +``` + +In addition to the `error_message` parameter, it's also possible to show more detailed error messages on the page about any files that failed to load. You can control this with the `show_detailed_errors` parameter. Detailed error messages will appear below the general error message. This only applies if `continue_after_error` is `false` (the default).
Detailed error messages can be useful when testing and debugging your experiment. If `show_detailed_errors` is `true` and one or more loading errors occurs before the `max_load_time` is reached, then the error page will also contain a list of the file(s) that produced an error, along with error information (if there is any). Note that this may not be a complete list, because it will only report any errors that occurred before the `max_load_time` was reached. If there are no file loading errors but preloading hasn't finished before the `max_load_time`, then the detailed error message will just tell you that loading timed out. + +```javascript +var preload_trial = { + type: 'preload', + auto_preload: true, + // show details of any file loading errors and/or loading time out + show_detailed_errors: true +} +``` + +## Options for handling errors + +If `continue_after_error` is `true`, then the experiment _will not stop_ if one or more files fail to load. Instead, the trial will end and the experiment will continue. However, the preload trial data will contain a property called `success`, which is whether or not all files were loaded successfully, and a property called `timeout`, which is whether or not loading stopped because the `max_load_time` was exceeded. The preload trial data will also contain lists of any `image`, `audio`, and `video` files that failed to load. This gives you the option to continue the experiment after preloading fails and use the preload trial data to decide what to do next. For instance, you may decide to skip the trials that use the stimuli files that failed to load, or try loading the failed files again. Another option is to simply end the experiment when preloading fails, but send the data back to your server so that you have more information about the loading failure.
```javascript +var preload_trial = { + type: 'preload', + auto_preload: true, + message: 'Please wait while the experiment loads...', + // don't stop the experiment if there are file loading errors or if loading times out + continue_after_error: true +} + +var save_data_trial = { + type: 'call-function', + async: true, + func: function(done){ + var data = jsPsych.data.get().json(); + // save_data is your own function for sending the data to your server + save_data(data, function() {done()}) + } +} + +// the experiment will stop here, since there are no valid key choices or trial duration +var fail_message = { + type: 'html-keyboard-response', + stimulus: 'The experiment failed to load. Please contact the researcher.', + choices: jsPsych.NO_KEYS, + trial_duration: null +} + +var if_loading_fails = { + timeline: [save_data_trial, fail_message], + conditional_function: function() { + if (jsPsych.data.getLastTrialData().values()[0].success) { + // preloading was successful, so skip this conditional timeline + // and move on with the experiment + return false; + } else { + // preloading failed, so run this conditional timeline: + // save the data to the server and show the fail message + return true; + } + } +} + +// ... rest of experiment + +jsPsych.init({ + timeline: [preload_trial, if_loading_fails, ... ] +}) + +``` + +The `preload` plugin's `on_success` and `on_error` callback functions provide another way of tracking preloading progress and handling file loading errors. These functions are called after any file either loads successfully or produces an error, respectively. These functions receive a single argument, which is the path of the file (string) that loaded or produced an error.
```javascript +var file_load_count = 0; +var file_error_count = 0; + +var preload_trial = { + type: 'preload', + auto_preload: true, + on_error: function(file) { + file_error_count++; + console.log('Error: ',file); + }, + on_success: function(file) { + file_load_count++; + console.log('Loaded: ',file); + } +}; +``` + +Note that there's no guarantee that any/all files will trigger one of these two callback functions, because they are cancelled after the `preload` trial ends. For instance, if a file takes longer to load than the `max_load_time`, then the `preload` trial will end due to timing out, and the `on_success` and `on_error` callbacks for any in-progress files will be cancelled. \ No newline at end of file diff --git a/docs/overview/plugins.md b/docs/overview/plugins.md new file mode 100644 index 0000000000..786751f4c8 --- /dev/null +++ b/docs/overview/plugins.md @@ -0,0 +1,320 @@ +# Plugins + +In jsPsych, plugins define the kinds of trials or events that should occur during the experiment. Some plugins define very general events, like displaying a set of instructions pages, displaying an image and recording a keyboard response, or playing a sound file and recording a button response. Other plugins are more specific, like those that display particular kinds of stimuli (e.g. Random-Dot Kinematogram, visual search circle), or run a specific version of a particular kind of task (e.g. the Implicit Association Test). Creating an experiment with jsPsych involves figuring out which plugins are needed to create the tasks you want your participants to perform. + +Plugins provide a structure for a particular trial or task, but often allow for significant customization and flexibility. For example, the `image-keyboard-response` plugin defines a simple structure for showing an image and collecting a keyboard response.
You can specify what the stimulus is, what keys the subject is allowed to press, how long the stimulus should be on the screen, how long the subject has to respond, and so on. Many of these options have reasonable default values; even though the image plugin has many different parameters, you only *need* to specify the image stimulus in order to use it. Each plugin has its own documentation page, which describes what the plugin does, what options are available, and what kind of data it collects. + +## Using a plugin + +To use a plugin, you'll need to load the plugin's JavaScript file in your experiment's HTML page. All jsPsych experiments also need to load the "jsPsych.js" file. + +```html +<script src="jspsych/jspsych.js"></script> +<script src="jspsych/plugins/jspsych-image-keyboard-response.js"></script> +``` + +Once a plugin is loaded, you can use JavaScript to define a trial that uses that plugin. All jsPsych trials have a `type`, which tells jsPsych what plugin to use to run the trial. The trial's `type` is the plugin name, which is usually the same as the plugin file name, but with the "jspsych-" prefix removed. + +The following JavaScript code defines a trial using the `image-keyboard-response` plugin to display an image file. This trial uses the default values for valid keys, stimulus duration, trial duration, and other parameters. + +```javascript +var image_trial = { + type: 'image-keyboard-response', + stimulus: 'images/happy_face.jpg' +} +``` + +You can override any default parameter values by adding them into your trial object. Here's an example of overriding the default values for `trial_duration` and `post_trial_gap`: + +```javascript +var image_trial = { + type: 'image-keyboard-response', + stimulus: 'images/happy_face.jpg', + trial_duration: 3000, + post_trial_gap: 2000 +} +``` + +## Parameters available in all plugins + +Each plugin specifies its own set of parameters. Check the documentation for a plugin to see what parameters are available and what they do.
+ +There is also a set of parameters that can be specified for any plugin: + +| Parameter | Type | Default Value | Description | +| -------------- | -------- | ----------------------- | ---------------------------------------- | +| data | object | *undefined* | An object containing additional data to store for the trial. See [the Data page](../overview/data.md) for more details. | +| post_trial_gap | numeric | null | Sets the time, in milliseconds, between the current trial and the next trial. If null, there will be no gap. | +| on_start | function | `function(){ return; }` | A callback function to execute when the trial begins, before any loading has occurred. See [the Event-Related Callbacks page](../overview/callbacks.md) for more details. | +| on_finish | function | `function(){ return; }` | A callback function to execute when the trial finishes, and before the next trial begins. See [the Event-Related Callbacks page](../overview/callbacks.md) for more details. | +| on_load | function | `function(){ return; }` | A callback function to execute when the trial has loaded, which typically happens after the initial display of the plugin has loaded. See [the Event-Related Callbacks page](../overview/callbacks.md) for more details. | +| css_classes | string | null | A list of CSS classes to add to the jsPsych display element for the duration of this trial. This allows you to create custom formatting rules (CSS classes) that are only applied to specific trials. For more information and examples, see the [Controlling Visual Appearance page](../overview/style.md) and the "css-classes-parameter.html" file in the jsPsych examples folder. | +| save_trial_parameters | object | `{}` | An object containing any trial parameters that should or should not be saved to the trial data. Each key is the name of a trial parameter, and its value should be `true` or `false`, depending on whether or not its value should be saved to the data. 
If the parameter is a function that returns the parameter value, then the value that is returned will be saved to the data. If the parameter is always expected to be a function (e.g. an event-related callback function), then the function itself will be saved as a string. For more examples, see the "save-trial-parameters.html" file in the jsPsych examples folder. | + +### The data parameter + +The `data` parameter allows you to add additional properties to the trial data. This can be useful for storing properties of the trial that are not directly apparent from the values that the plugin records. The `data` parameter value should be an object that contains key-value pairs. + +A simple example is the [Flanker Task](https://en.wikipedia.org/wiki/Eriksen_flanker_task). In this experiment, participants respond to the direction of a central arrow by pressing a key to the left for a left-pointing arrow (<) and a key to the right for a right-pointing arrow (>). The arrow appears in the center of *flankers*, or arrows that the participant should ignore. Those flankers can be congruent (>>>>>) or incongruent (<<><<). + +A trial for the Flanker Task written with jsPsych might look like this: + +```javascript +var trial = { + type: 'html-keyboard-response', + stimulus: '<<<<<', + choices: ['f','j'], + data: { + stimulus_type: 'congruent', + target_direction: 'left' + } +} +``` + +Note the use of the data parameter to add a property `stimulus_type` with the value `congruent` and a property `target_direction` with the value `left`. Having these properties recorded directly in the data simplifies data analysis, making it easy to aggregate data by `stimulus_type` and/or `target_direction`. + +### The post_trial_gap (ITI) parameter + +The default inter-trial interval (ITI) in jsPsych is 0 ms. This can be adjusted at the experiment-wide level by changing the `default_iti` parameter in `jsPsych.init()`. 
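For example, a 500 ms gap between every trial can be set experiment-wide like this (a sketch; the `timeline` variable is assumed to be defined elsewhere):

```javascript
jsPsych.init({
  timeline: timeline,
  default_iti: 500 // 500 ms blank screen between consecutive trials
});
```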
+ +The ITI can also be controlled at the trial level through the `post_trial_gap` parameter. Setting this parameter to a positive integer *x* will cause a blank screen to display after the trial for *x* milliseconds. Setting this parameter for a trial will override the `default_iti` value set in `jsPsych.init`. + +```javascript +var trial = { + type: 'html-keyboard-response', + stimulus: 'There will be a 1.5 second blank screen after this trial.', + post_trial_gap: 1500 +} +``` + +### The on_start parameter + +Immediately before a trial runs, there is an opportunity to run an arbitrary function through the `on_start` event handler. This event handler is passed a single argument containing an *editable* copy of the trial parameters. This function can therefore be used to alter the trial based on the state of the experiment, among other uses. + +```javascript +// when this trial runs, the on_start function will change the trial's stimulus and data parameters, +// so the trial will display an incongruent Flanker stimulus with a right-facing central arrow +var trial = { + type: 'html-keyboard-response', + stimulus: '<<<<<', + choices: ['f','j'], + data: { + stimulus_type: 'congruent', + target_direction: 'left' + }, + on_start: function(trial){ + trial.stimulus = '<<><<'; + trial.data.stimulus_type = 'incongruent'; + trial.data.target_direction = 'right'; + } +} +``` + +### The on_finish parameter + +After a trial is completed, there is an opportunity to run an arbitrary function through the `on_finish` event handler. This function is passed a single argument containing an *editable* copy of the data recorded for that trial. This function can therefore be used to update the state of the experiment based on the data collected, or modify the data collected. + +The `on_finish` function can be useful to calculate new data properties that were unknowable at the start of the trial. 
For example, with the Flanker Task example above, the `on_finish` function could check the response and use this information to add a new property to the data called `correct`, which is either `true` or `false`. + +```javascript +// in addition to all of the standard data collected for this trial, +// this on_finish function adds a property called 'correct' +// which is either true or false +// depending on the response that was made +var trial = { + type: 'html-keyboard-response', + stimulus: '<<<<<', + choices: ['f','j'], + data: { + stimulus_type: 'congruent', + target_direction: 'left', + correct_response: 'f' + }, + on_finish: function(data){ + if(jsPsych.pluginAPI.compareKeys(data.response, data.correct_response)){ + data.correct = true; + } else { + data.correct = false; + } + } +} +``` + +### The on_load parameter + +The `on_load` callback function will trigger once the trial has completed loading. For most plugins, this will occur once the display has been initially updated but before any user interactions or timed events (e.g., animations) have occurred. This can be useful for changing various aspects of the page elements and their properties that would otherwise require modifying the plugin file. + +```javascript +var trial = { + type: 'image-keyboard-response', + stimulus: 'imgA.png', + on_load: function() { + // this will change the src attribute of the image after 500ms + setTimeout(function(){ + document.querySelector('img').src = 'imgB.png' + }, 500); + } +}; +``` + +### The css_classes parameter + +The `css_classes` parameter allows you to add an array of CSS class names to the jsPsych display element on that specific trial. This allows you to create custom style and formatting rules that are only applied to specific trials. If you want CSS rules that only apply to specific elements during a trial, you can use additional CSS selectors.
+ +```html +<style> + /* applies to everything in the display element on trials that include the 'large-text' class */ + .large-text { font-size: 200%; } + /* applies only to paragraph elements within those trials */ + .large-text p { color: gray; } +</style> + +<script> + var trial = { + type: 'html-keyboard-response', + stimulus: '<p>+</p>', + css_classes: ['large-text'] + } +</script> +``` + +### The save_trial_parameters parameter + +The `save_trial_parameters` parameter allows you to tell jsPsych what parameters you want to be saved to the data. This can be used to override the parameter values that the plugin saves by default. You can add more parameter values to the data that are not normally saved, or remove parameter values that normally are saved. This can be especially useful when the parameter value is dynamic (i.e. a function) and you want to record the value that was used during the trial. + +```javascript +var trial = { + type: 'html-button-response', + stimulus: '<p style="color: blue;">BLUE</p>', + choices: function() { + return jsPsych.randomization.shuffle(['Yes','No']); + }, + post_trial_gap: function() { + return jsPsych.randomization.sampleWithoutReplacement([200,300,400,500],1)[0]; + }, + save_trial_parameters: { + // save the randomly-selected button order and post-trial gap duration to the trial data + choices: true, + post_trial_gap: true, + // don't save the stimulus + stimulus: false + } +} +``` + +!!! note + You cannot remove the `internal_node_id` and `trial_index` values from the trial data, because these are used internally by jsPsych. + +## Data collected by all plugins + +Each plugin defines what data is collected on the trial. The documentation for each plugin specifies what information will be stored in the trial data. + +In addition to the data collected by a plugin, there is a default set of data that is collected on every trial. + +| Name | Type | Value | +| ---------------- | ------- | ---------------------------------------- | +| trial_type | string | The name of the plugin used to run the trial. | +| trial_index | numeric | The index of the current trial across the whole experiment. | +| time_elapsed | numeric | The number of milliseconds between the start of the experiment and when the trial ended. | +| internal_node_id | string | A string identifier for the current TimelineNode. | + +## Creating a new plugin + +You can add new kinds of tasks to jsPsych by creating new plugins, or modifying existing plugins. A task can be virtually any kind of activity. If it can be implemented in JavaScript, then it almost certainly can be turned into a jsPsych plugin. + +### What's in a plugin file? + +Plugin files follow a specific template. Adherence to the template is what allows jsPsych to run a plugin without knowing anything about what the plugin is doing. What makes plugins so flexible is that the template imposes very few requirements on the code.
Here's what an empty plugin template looks like: + +```js +jsPsych.plugins['plugin-name'] = (function(){ + + var plugin = {}; + + plugin.info = { + name: 'plugin-name', + parameters: { + } + } + + plugin.trial = function(display_element, trial){ + jsPsych.finishTrial(); + } + + return plugin; + +})(); +``` + +This plugin will work! It defines a plugin called 'plugin-name', and it does absolutely nothing. However, it won't break the experiment, and jsPsych will understand that this is a valid plugin. + +Let's examine it in more detail. + +The overall structure of the plugin is defined using the JavaScript module design pattern. This pattern uses a technique called an anonymous closure. This is why the first line has `(function(){` and the last line is `})();`. The details aren't important, but if you want to learn more about it, [this is a nice overview](http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html). This pattern is useful because it allows for persistent state and private scope. In other words, the plugin is isolated and can't be altered by other plugins. + +The module, created by the `(function(){` `})();` expressions, contains an object called `plugin`. The `plugin` object has two properties: `info` and `trial`. The `plugin` object is returned at the end of the module, which is what assigns the defined properties of `plugin` to `jsPsych.plugins['plugin-name']`. + +#### plugin.info + +The plugin's `info` property is an object that contains all of the available parameters for the plugin. Each parameter name is a property, and the value is an object that includes a description of the parameter, the value's type (string, integer, etc.), and the default value. See some of the plugin files in the jsPsych plugins folder for examples. + +jsPsych allows most [plugin parameters to be dynamic](dynamic-parameters.md), which means that the parameter value can be a function that will be evaluated right before the trial starts.
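As a sketch, a parameter entry in `plugin.info` might look like this (the parameter name and description here are illustrative):

```javascript
plugin.info = {
  name: 'plugin-name',
  parameters: {
    // 'stimulus' is an illustrative parameter name
    stimulus: {
      type: jsPsych.plugins.parameterType.STRING, // the expected value type
      pretty_name: 'Stimulus',
      default: undefined, // undefined means there is no default; a value should be provided
      description: 'The string to be displayed.'
    }
  }
}
```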
However, if you want your plugin to have a parameter that is a function that _shouldn't_ be evaluated before the trial starts, then you should make sure that the parameter type is `'FUNCTION'`. This tells jsPsych not to evaluate the function as it normally does for dynamic parameters. See the `canvas-*` plugins for examples. + +#### plugin.trial + +The plugin's `trial` property is a function that runs a single trial. There are two parameters that are passed into the trial method. The first, `display_element`, is the DOM element where jsPsych content is being rendered. This parameter will be an `HTMLElement`. Generally, you don't need to worry about this parameter being in the correct format, and can assume that it is an `HTMLElement` and use methods of that class. The second, `trial`, is an object containing all of the parameters specified in the corresponding TimelineNode. If you have specified all of your parameters in `plugin.info`, along with default values for each one, then the `trial` object will contain the default values for any parameters that were not specified in the trial's definition. + +The only requirement for the `trial` method is that it calls `jsPsych.finishTrial()` when it is done. This is how jsPsych knows to advance to the next trial in the experiment (or end the experiment if it is the last trial). The plugin can do whatever it needs to do before that point. + +Of course, there are other things that you will probably want the plugin to do inside the `plugin.trial` function, besides just end. Here are some examples: + +### Changing the content of the display + +There are a few ways to change the content of the display. The `display_element` parameter of the trial method contains the DOM element for displaying content, so you can use various JavaScript methods for interaction with the display element. A common one is to change the `innerHTML`. + +```javascript +var html_content = '

<p>This is the first paragraph</p>'; +html_content += '<p>This is the second paragraph</p>
'; + +display_element.innerHTML = html_content; +``` + +jsPsych doesn't clear the display before or after each trial, so it is often appropriate to use `innerHTML` to clear the display at the end of a trial: + +```javascript +// clear the display +display_element.innerHTML = ''; +``` + +### Writing data + +Plugins exist to collect data, so saving data is obviously a crucial thing to do. You can pass an object of data as the parameter to `jsPsych.finishTrial()`: + +```javascript +var data = { + correct: true, + rt: 350 +} + +jsPsych.finishTrial(data); +``` + +The data recorded will be that `correct` is `true` and that `rt` is `350`. Additional data for the trial will also be collected automatically by the jsPsych library. + +### The plugin template + +An empty plugin template is included in the `plugins/template` folder. \ No newline at end of file diff --git a/docs/overview/progress-bar.md b/docs/overview/progress-bar.md index 9bbcf8b1b3..ea3eb81725 100644 --- a/docs/overview/progress-bar.md +++ b/docs/overview/progress-bar.md @@ -29,7 +29,7 @@ var trial = { } ``` -You can also get the current value of the progress bar with `jsPsych.getProgressBarCompleted()` +You can also get the current value of the progress bar with `jsPsych.getProgressBarCompleted()`. 
```js var proportion_complete = jsPsych.getProgressBarCompleted(); @@ -44,3 +44,67 @@ jsPsych.init({ auto_update_progress_bar: false }); + +Here's a complete example showing how to use these functions and `jsPsych.init()` settings to manually update the progress bar: + +```js +var n_trials = 5; + +var start = { + type: 'html-keyboard-response', + stimulus: 'Press any key to start!', + on_start: function() { + // set progress bar to 0 at the start of experiment + jsPsych.setProgressBar(0); + } +}; + +var trial = { + type: 'html-keyboard-response', + stimulus: 'This is a trial!', + on_finish: function() { + // at the end of each trial, update the progress bar + // based on the current value and the proportion to update for each trial + var curr_progress_bar_value = jsPsych.getProgressBarCompleted(); + jsPsych.setProgressBar(curr_progress_bar_value + (1/n_trials)); + } +}; + +var trials = { + timeline: [trial], + repetitions: n_trials +}; + +var done = { + type: 'html-keyboard-response', + stimulus: 'Done!' +}; + +jsPsych.init({ + timeline: [start, trials, done], + show_progress_bar: true, + auto_update_progress_bar: false +}); +``` + +## Custom Text + +By default, jsPsych adds the text "Completion Progress" to the left of the progress bar. You can specify custom text using the `message_progress_bar` parameter in `jsPsych.init`. + +```js +// support for different spoken languages +jsPsych.init({ + timeline: [...], + show_progress_bar: true, + message_progress_bar: 'Porcentaje completo' +}); +``` + +```js +// no message +jsPsych.init({ + timeline: [...], + show_progress_bar: true, + message_progress_bar: '' +}); +``` \ No newline at end of file diff --git a/docs/overview/prolific.md b/docs/overview/prolific.md new file mode 100644 index 0000000000..dd9fe36639 --- /dev/null +++ b/docs/overview/prolific.md @@ -0,0 +1,78 @@ +# Integrating with Prolific + +[Prolific](https://www.prolific.co/?ref=5JCXZPVU) is a participant recruitment service aimed at research.
Integrating a jsPsych experiment with Prolific requires capturing the participant's ID and sending the participant to a completion URL at the end of the experiment. + +## Capturing the Participant ID, Study ID, and Session ID + +When creating a study on Prolific you must provide the URL to your study. You can host your jsPsych experiment however you'd like - some options are discussed in the [Running Experiments](/overview/running-experiments/#hosting-the-experiment-and-saving-the-data) documentation page. Once you've got a URL to your experiment, you can enter that in the *study link* section of Prolific. Then, click the option to record Prolific IDs via URL parameters. + +![Prolific screenshot](/img/prolific-study-link.png) + +This will append information about the participant's Prolific ID (`PROLIFIC_PID`), the study's ID (`STUDY_ID`), and the session ID (`SESSION_ID`) to the URL that participants use to access your experiment. + +We can capture these variables with jsPsych, and add them to jsPsych's data. This can be done anywhere in your code. This code does not need to run as part of your experiment timeline. + +```html +<script> + var subject_id = jsPsych.data.getURLVariable('PROLIFIC_PID'); + var study_id = jsPsych.data.getURLVariable('STUDY_ID'); + var session_id = jsPsych.data.getURLVariable('SESSION_ID'); + + jsPsych.data.addProperties({ + subject_id: subject_id, + study_id: study_id, + session_id: session_id + }); +</script> +``` + +## Completing the Experiment + +When the experiment is complete, Prolific requires that you send the participant to a specific URL that marks the session as complete on Prolific's server. The link is provided to you by Prolific in the *study completion* section of the setup. + +![Prolific Study Completion Screenshot](/img/prolific-study-completion.png) + +You can accomplish this in a couple of different ways. + +!!! warning + It's important that you've saved all the data from your experiment before the participant returns to Prolific. Make sure that any server communication has completed prior to redirecting the participant. One way to do this is by using the async features of the `call-function` plugin ([example](/plugins/jspsych-call-function/#async-function-call)).
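A sketch of that approach is below; `save_to_server` is a placeholder for your own function that sends the data to your server and invokes a callback when the server confirms the save:

```javascript
var save_data = {
  type: 'call-function',
  async: true,
  func: function(done){
    var data = jsPsych.data.get().json();
    // save_to_server is a placeholder for your own server communication code
    save_to_server(data, function(){
      done(); // the experiment only continues (and redirects) after done() is called
    });
  }
};
```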
+ +### Participant clicks a link + +One option is to create a trial that contains a link that the participant clicks to end the experiment and return to Prolific. For example, the `html-keyboard-response` plugin can be used to display text that includes a link. This could go on a debriefing page. + +Here's an example trial that could be used. Note that `choices` is set to `jsPsych.NO_KEYS`, which will prevent the participant from continuing past this point in the experiment. + +```js +var final_trial = { + type: 'html-keyboard-response', + stimulus: `

<p>You've finished the last task. Thanks for participating!</p> +    <p><a href="https://app.prolific.co/submissions/complete?cc=XXXXXXX">Click here to return to Prolific and complete the study</a>.</p>

`, + choices: jsPsych.NO_KEYS +} +``` + +### Automatically redirect + +A second option is to automatically redirect the participant to the completion URL when the experiment is finished. You could do this in a number of places in the jsPsych timeline. + +Here's an example using the `on_finish` event for the entire experiment. + +```js +jsPsych.init({ + timeline: [...], + on_finish: function(){ + window.location = "https://app.prolific.co/submissions/complete?cc=XXXXXXX" + } +}); +``` + + diff --git a/docs/overview/running-experiments.md b/docs/overview/running-experiments.md new file mode 100644 index 0000000000..49586f7393 --- /dev/null +++ b/docs/overview/running-experiments.md @@ -0,0 +1,98 @@ +# Running Experiments + +You can run your jsPsych experiment: + +**Offline**, by opening the HTML file directly in the browser using the `file://` protocol. + +**Online**, by hosting the files on a web server using the `http://` or `https://` protocol. + +The way that you run your experiment will have consequences for certain aspects about how the experiment works, and what your experiment will be able to do. This page explains what you need to know about both of these options. + +!!! info + If you are looking for a tool to automate deployment-related tasks, check out the [jsPsych Builder](https://github.com/bjoluc/jspsych-builder) CLI utility. + It automatically bundles scripts and style sheets, configures media preloading, and yields a zip file that contains all files for deployment (online or offline). + jsPsych Builder can also directly build JATOS experiment files (.jzip) that you can upload to a JATOS server (see [this section](#hosting-the-experiment-and-saving-the-data) below for more info about JATOS and other server options). + +## Offline + +You can run your jsPsych experiment offline by opening the HTML file directly in a web browser, for instance by double-clicking on it. This uses the `file://` protocol. 
It's usually the fastest and easiest way to run through an experiment, and is very useful while writing and testing the code. + +At some point you will need to move your experiment files onto a server and send the data to a database, since this is how you will ultimately collect the data (unless you're planning to collect data on your local computer). There are some important differences between the way the experiment runs offline compared to online via a web server. + +Note that, unless noted, here we're using the word "server" to mean either a _local_ server (which runs on your computer and only makes the experiment files available from within that computer, and is often used during development), or a _remote_ server (which does not run on your computer and does share your experiment files over the internet). + +### Cross-origin requests (CORS) and safe mode + +Web browsers have a security policy called [cross-origin resource sharing (CORS)](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) that determines whether the webpage can request files that come from a different origin (i.e. protocol, host/domain, and port). This isn't a problem when your study runs _online_, because in that case your experiment files all have the same origin. However, when you run your experiment _offline_, the CORS policy blocks some jsPsych features that require [loading local files](https://security.stackexchange.com/questions/190266/why-chrome-blocks-ajax-locally/190321#190321). If your experiment uses these features, then [CORS errors](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors) will prevent the experiment from running. + +To prevent these errors, jsPsych uses a 'safe mode' when it detects that the HTML page is running via the `file://` protocol, and if so, automatically disables the features that don't work in that context. 
Specifically, when a jsPsych experiment runs offline: + +* **Web Audio is disabled** (even if `use_webaudio` is set to `true` in `jsPsych.init`). The WebAudio API option is used by default because it allows more precise measurement of response times relative to the onset of the audio. But because WebAudio doesn't work offline, audio will be played using HTML5 audio instead. This is equivalent to setting `use_webaudio` to `false` in `jsPsych.init`. +* **Video preloading is disabled** (both automatic and manual preloading via the `preload` plugin). Videos will still play when you run your experiment offline, but they will load _during_ the experiment, which might cause noticeable delays before video playback starts. + +This safe mode feature is controlled by the `override_safe_mode` parameter in [`jsPsych.init`](../core_library/jspsych-core.md#jspsychinit), which defaults to `false`. If you leave this setting as the default, then you won't need to worry about CORS errors while running your experiment offline, or remembering to change your `jsPsych.init` settings when you move the experiment online. + +It's possible to override jsPsych's safe mode by setting `override_safe_mode` to `true` in `jsPsych.init`. One reason you might do this is if you've disabled web security features in your browser (see [here](https://alfilatov.com/posts/run-chrome-without-cors/) and [here](https://stackoverflow.com/questions/4819060/allow-google-chrome-to-use-xmlhttprequest-to-load-a-url-from-a-local-file) for instructions in Chrome), which is safe to do if you know what you're doing. If your experiment does not use Web Audio or preloaded videos, then jsPsych's safe mode feature will not have any effect. + +The `override_safe_mode` parameter also has no effect when your experiment is running online via a web server, because the page will be loaded via the `http://` or `https://` protocol.
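For example, this sketch enables Web Audio while running offline, assuming you have deliberately disabled your browser's web security features for local testing (`timeline` is assumed to be defined elsewhere):

```javascript
jsPsych.init({
  timeline: timeline,
  use_webaudio: true, // normally disabled offline; works here only because safe mode is overridden
  override_safe_mode: true // skip the automatic file:// restrictions
});
```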
+ +### Media loading + +While running your experiment offline, any media files are likely to load very quickly because they are stored on your own computer's disk. Therefore you may not notice problems with file loading delays while running your experiment locally (either offline or on a _local_ server) because the files will load fast enough that they never cause disruption. However, when your experiment is hosted on a _remote_ server, the files will need to be transferred over the internet, which means they will take longer to load - in some cases much longer. Loading delays are most noticeable with media files: images, audio, and video. As explained on the [Media Preloading](media-preloading.md) page, loading delays during your experiment can cause problems for stimulus display and response times. + +It is important to test your experiment to ensure that any media files are preloading successfully and not being requested again during the experiment. You can use the Network tab in your browser's developer tools to see when files are loaded and to simulate a slow internet connection (see [here](https://developers.google.com/web/tools/chrome-devtools/network) for Chrome Network tab documentation). If you are preloading many and/or large files, such as videos, you may want to increase the `max_load_time` parameter in [`the preload plugin`](../plugins/jspsych-preload.md) so that participants with slow/unreliable internet connections will be able to take part in your experiment. + +### Permanent data storage + +As explained in the [Data Storage, Aggregation, and Manipulation](data.md#data-in-jspsych-permanent-and-non-permanent-data) page, jsPsych stores information in the participant's browser. While running an experiment offline, you won't be able to send the data to a database. 
However, you can still see the data that jsPsych collects by saving it as a local file (using [`jsPsych.data.get().localSave`](../core_library/jspsych-data.md#localsave)), displaying it in the webpage at the end of the experiment (using [`jsPsych.data.displayData`](../core_library/jspsych-data.md#jspsychdatadisplaydata)), or printing it to the browser's console (using [`console.log`](https://www.w3schools.com/jsref/met_console_log.asp)). + +Permanent data storage is also necessary when the code that runs the experiment depends on information that can't be known in advance, and that changes throughout data collection. Some common examples of this in cognitive behavioral research are **version counterbalancing**, where the experiment code needs to access and update the history of version assignment in order to determine which version should be assigned, and **multi-session/training studies**, where the experiment might need to access and update information about each participant like their current session number, task difficulty level, etc. + +Doing these things in an automated way requires the use of a server. While developing and testing your experiment offline, you might choose to simulate some of these things and then implement them properly once you move your experiment online.
For instance, you could [randomize](../core_library/jspsych-randomization.md#jspsychrandomizationsamplewithoutreplacement) instead of counterbalancing version assignment: + +```js +var versions = [1,2]; +var random_version = jsPsych.randomization.sampleWithoutReplacement(versions,1)[0]; +``` + +And use [URL query parameters](../core_library/jspsych-data.md#jspsychdatageturlvariable) to pass in variables like session number and difficulty level: + +```js +// add the variables onto the end of the URL that appears in the browser when you open the file +// e.g., file:///C:/my_experiment.html?id=1&sess=2&diff=3 +var participant_id = jsPsych.data.getURLVariable('id'); +var session = jsPsych.data.getURLVariable('sess'); +var difficulty = jsPsych.data.getURLVariable('diff'); +``` + + + + + +## Online + +### Hosting the Experiment and Saving the Data + +jsPsych is a front-end JavaScript library that runs entirely on the participant's computer. To run a jsPsych experiment over the internet, the files need to be hosted on a public web server so that participants can access the experiment using a web browser. When the participant completes the experiment in the browser, all of the data that jsPsych collects is stored on the participant's computer in the browser's memory. To get access to this data, it needs to be sent from the participant's browser back to the web server and stored in a database or a file. + +To be maximally flexible, jsPsych doesn't provide a single built-in solution for the web server component of your experiment. This makes jsPsych compatible with a wide range of hosting services and tools, allowing researchers to choose the web server option that best suits their needs. + +Some options for running your jsPsych experiment online include: + +* [Cognition.run](https://www.cognition.run/) - A free service designed specifically for hosting jsPsych experiments, with an easy-to-use interface.
+* [JATOS](https://www.jatos.org/Whats-JATOS.html) - A free program that runs on your own server and provides a GUI for setting up experiments and accessing the data. Offers lots of features for creating more complex experiments and managing multiple researchers.
+* [Pavlovia](https://pavlovia.org/) - A paid hosting service for web-based experiments, run by the PsychoPy team. Experiment files are managed on a GitLab repository. Participants access the experiment through a link to Pavlovia.
+* [PsiTurk](https://psiturk.org/) - A Python-based program to help you host your experiment on your own computer and collect data from MTurk (see Recruiting Participants below). Relatively easy for a DIY option.
+* [Pushkin](https://languagelearninglab.gitbook.io/pushkin/) - A set of tools to help you set up your own virtual laboratory for online experiments. This option differs from the others in that it helps you set up a complete website that may contain many different experiments, information about the laboratory, participant logins, and other features that are targeted at hosting large-scale data collection efforts.
+* Full DIY - You can set up your own web server and database and handle the communication yourself. Traditional web server 'stacks' include [LAMP](https://www.digitalocean.com/community/tutorial_collections/how-to-install-lamp)/[LEMP](https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-ubuntu-20-04) (Linux operating system, Apache or Nginx server application, MySQL database, and PHP programming language). Other common web server frameworks include [Flask](https://flask.palletsprojects.com/) (Python) and [Node.js](https://nodejs.org/) (JavaScript).
+
+### Recruiting Participants
+
+Once your experiment is running online, you can recruit participants in the same way that you would for lab-based studies. For instance, if your institution uses SONA, you can advertise your web-based study link on SONA.
SONA allows you to automatically embed a unique ID in online study URLs, which you can then save in your data using [jsPsych's URL query parameters function](../core_library/jspsych-data.md#jspsychdatageturlvariable). SONA will also generate a completion URL that you can redirect participants to at the end of the study, and this will mark them as having completed the study in SONA.
+
+To take full advantage of hosting an experiment online, many researchers advertise their experiments more widely. Social media and other media outlets provide one option for reaching a large number of potential participants. There are also some commercial platforms that you can use to advertise your study and pay anonymous online participants. These recruitment platforms charge a fee for use. The advantages of these platforms are that they handle the participant payments and allow you to specify pre-screening criteria. The most commonly used recruitment platforms in online behavioral research are:
+
+* [Prolific](https://www.prolific.co/): An online labor market designed specifically for web-based research.
+* [Amazon Mechanical Turk (MTurk)](https://www.mturk.com/): An online labor market designed for advertising paid 'human intelligence tasks'. This service was designed for use by commercial businesses but has been used by behavioral researchers for many years.
+
+Like SONA, Prolific and MTurk use URL query parameters to get participant information, and redirection to specific URLs to mark participants as having finished the study. jsPsych includes [convenience functions for interacting with MTurk participants](../core_library/jspsych-turk.md). Information about integrating with Prolific can be found in the researcher support section of their website.
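To make the query-parameter handshake concrete, here is a minimal sketch of the kind of lookup that [`jsPsych.data.getURLVariable`](../core_library/jspsych-data.md#jspsychdatageturlvariable) performs, written as a standalone function (a hypothetical helper; the URL and parameter names are invented for illustration):

```javascript
// Parse one query parameter out of a study URL.
// Recruitment platforms append IDs to the link they give participants, e.g.:
//   https://example.com/exp.html?participant=abc123&sess=2
function getQueryVariable(url, name) {
  var query = url.split('?')[1] || '';
  var pairs = query.split('&');
  for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split('=');
    if (decodeURIComponent(parts[0]) === name) {
      return decodeURIComponent(parts[1] || '');
    }
  }
  return undefined; // parameter not present in the URL
}

var url = 'https://example.com/exp.html?participant=abc123&sess=2';
getQueryVariable(url, 'participant'); // "abc123"
getQueryVariable(url, 'sess');        // "2"
```

In a real study you would store the returned ID in the data (for example with `jsPsych.data.addProperties`) so that each row of data can be matched back to the recruitment platform.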
diff --git a/docs/overview/style.md b/docs/overview/style.md
new file mode 100644
index 0000000000..9c432825ff
--- /dev/null
+++ b/docs/overview/style.md
@@ -0,0 +1,293 @@
+# Controlling Visual Appearance
+
+Your experiment's style and formatting come from the CSS (cascading style sheet) rules that are stored in the jspsych.css file, and from the browser's defaults. There are a few ways to change the style and formatting in your experiment. The method that you choose is partly a matter of personal preference. It might also depend on whether you want the style/formatting change(s) to apply to _specific trials_, to _the whole experiment_ (HTML page), or across _different experiments_. This section discusses the different ways of incorporating CSS into your jsPsych experiment. You can also see [this page about adding CSS to web pages](https://www.w3schools.com/html/html_css.asp) to learn more.
+
+## Inline CSS
+
+Whenever you're using a parameter that accepts an HTML-formatted string, you have the option to include inline CSS. Inline CSS is a way of adding style and formatting directly into a specific HTML element using its ["style" attribute](https://www.w3schools.com/tags/att_style.asp). This is a good option when you want to make a few simple style changes to an HTML trial parameter.
+
+To change an element's style using inline CSS, you can set the element's "style" attribute to a string that contains the CSS parameters that you want to change, along with the values that you want to use. The syntax is "<parameter-name>: <parameter-value>;".
+
+In the example below, the stimulus font size is set to 30px and the text color is set to red. These changes will _only_ be applied to this stimulus text in this specific trial.
+
+```javascript
+var trial = {
+  type: 'html-keyboard-response',
+  stimulus: '<p style="font-size:30px;color:red;">hello world!</p>'
+}
+```
+
+You can also use a [dynamic parameter](/overview/dynamic-parameters) to combine inline CSS and trial-specific variables. This allows you to easily apply the same inline CSS to multiple trials. Here's an example using a dynamic stimulus parameter and [timeline variables](/overview/timeline/#timeline-variables):
+
+```javascript
+var trial = {
+  type: 'html-keyboard-response',
+  stimulus: function() {
+    var stim = '<p style="font-size:30px;color:red;">'+jsPsych.timelineVariable('text')+'</p>';
+    return stim;
+  }
+}
+var trial_procedure = {
+  timeline: [trial],
+  timeline_variables: [
+    {text: 'Welcome'},
+    {text: 'to'},
+    {text: 'the'},
+    {text: 'experiment!'}
+  ]
+}
+```
+
+## Adding CSS rules
+
+You may want to add a lot of different CSS changes to your experiment, re-use the same change(s) across lots of different trials, and/or separate the style/formatting from the HTML string. In these cases, you might find it useful to create CSS rules rather than using inline CSS.
+
+Creating CSS rules is a lot like using inline CSS, except that you also need to use a [CSS selector](https://www.w3schools.com/css/css_selectors.asp). This is because your CSS rules aren't attached to any specific HTML element (unlike inline CSS), so you need to tell the browser which element(s) the style rules should apply to. The syntax is "css-selector { <parameter-name>: <parameter-value>; }".
+
+In the example below, the CSS selector "p" tells the browser to apply the font size change to any text that is inside of a <p> element.
+
+```css
+p {
+  font-size: 30px;
+}
+```
+
+You can make more specific changes using CSS rules. The specificity will depend on the CSS selectors that are used. In addition to using the [tag name](https://www.w3schools.com/cssref/sel_element.asp) (e.g. "p"), other common CSS selectors include the element's [ID](https://www.w3schools.com/html/html_id.asp) or [class](https://www.w3schools.com/html/html_classes.asp). If you are selecting an element using its ID, then the CSS selector needs to have a \# in front of the ID, e.g. "\#stimulus". If you are selecting elements based on their class, then you need to include a . in front of the class, e.g. ".large-text".
+
+In the example below, the "#stimulus" CSS selector means that the width change will only affect elements with the "stimulus" ID, and the ".large-text" CSS selector means that the font size change will only affect elements that have the "large-text" class.
+
+```css
+#stimulus {
+  width: 300px;
+}
+.large-text {
+  font-size: 200%;
+}
+```
+
+It is possible to create even more specific CSS selectors, for instance by combining tags, IDs, and/or classes. For example, let's say that you are showing feedback text to participants, and that this text is inside of a <p> tag. You could add the ID "correct" to the <p> element for correct response feedback, and the ID "incorrect" to the <p> element for incorrect response feedback. Then you can define separate styles for correct and incorrect feedback text like this:
+
+```css
+p#incorrect {
+  color: red;
+}
+p#correct {
+  color: green;
+}
+```
+
+See [this page about CSS selectors](https://www.w3schools.com/cssref/css_selectors.asp) for a complete reference of CSS selector patterns and their meanings.
+
+### With style tags
+
+You can add CSS rules to your HTML page by putting them inside of <style> tags. These rules will be applied to your _whole experiment_. This method can be useful for making general changes to the way that your experiment looks.
+
+In the example below, the default font size is set to 25px throughout the experiment. This will overrule the default font size of 18px that is set in the jspsych.css file.
+
+```html
+<head>
+  <script src="jspsych/jspsych.js"></script>
+  <link rel="stylesheet" href="jspsych/css/jspsych.css">
+  <style>
+    .jspsych-display-element {
+      font-size: 25px;
+    }
+  </style>
+</head>
+```
+
+### With a stylesheet
+
+CSS rules can also be applied to your experiment with a link to an external CSS file. This is the same method that is usually used to apply the style from jspsych.css to an experiment. These rules will be applied to your _whole experiment_. You may find it useful to use a custom stylesheet when you want to re-use the same CSS rules across _multiple experiments_ (HTML files).
+
+This example shows how to add a custom CSS file in addition to the styles provided in jspsych.css:
+
+```html
+<head>
+  <script src="jspsych/jspsych.js"></script>
+  <link rel="stylesheet" href="jspsych/css/jspsych.css">
+  <link rel="stylesheet" href="my_experiment_style.css">
+</head>
+```
+
+Below are some example contents of an external CSS file, like the "my_experiment_style.css" from the example above.
This CSS will (1) change the page background color to black, (2) change the default font to 25px and white, and (3) limit the width of the page content so that it can only take up to 80% of its normal width.
+
+```css
+body {
+  background-color: black;
+}
+.jspsych-display-element {
+  font-size: 25px;
+  color: white;
+}
+.jspsych-content {
+  max-width: 80%;
+}
+```
+
+!!! note
+    <style> tags are not used inside of an external CSS file.
+
+### Using the css_classes trial parameter
+
+CSS rules can also be applied in a trial-specific way using the `css_classes` parameter. This parameter will apply one or more classes to the <div> element that holds all of the jsPsych page content during that specific trial. This way you can treat CSS styles just like any other trial parameter.
+
+You can use a static `css_classes` parameter value if you always want to apply the same CSS rules to the trial. In the 'fixation' example below, separating the style rules from the `stimulus` string makes the code a little bit 'cleaner', and this makes it easier to re-use the same style rules in other parts of the experiment.
+
+```html
+<style>
+  .fixation {
+    font-size: 90px;
+    font-weight: bold;
+    color: gray;
+  }
+</style>
+<script>
+  var fixation_trial = {
+    type: 'html-keyboard-response',
+    choices: jsPsych.NO_KEYS,
+    trial_duration: 500,
+    stimulus: '+',
+    css_classes: ['fixation']
+  };
+</script>
+```
+
+You may want the `css_classes` parameter to vary across trials. If so, you can turn it into a [dynamic parameter](/overview/dynamic-parameters) or use [timeline variables](/overview/timeline/#timeline-variables) (see examples below).
+
+One thing to note about the `css_classes` parameter is that it only adds the class(es) to the jspsych-content <div> element, which is the "parent" element that contains all of the experiment content. Often you'll want your CSS rules to be applied to other elements _inside_ of this jspsych-content div. Sometimes your CSS rules will be "inherited" by all of the other jsPsych content inside of this parent <div>.
For instance, in the `fixation` example above, the CSS rules that change the font size, weight and color are applied to the parent <div> and automatically passed on to the stimulus text through inheritance.
+
+There are two reasons why a CSS rule like the one above for `fixation` may not work the way you expect:
+
+1. Not all CSS properties are inherited from the parent element.
+
+2. When a CSS property is inherited from the parent element, it will affect _all_ elements in a given trial.
+
+In these cases, you can change your CSS selector to make it more specific: add a space after the class name, then add _more CSS selectors_ to select the specific element(s) that you want to change.
+
+!!! tip
+    To find this more specific CSS selector, you can right-click on the element and select *Inspect*. In the inspector window, right-click on the property corresponding to the element and copy the selector. You might need to increase the trial's `trial_duration` to give yourself enough time to inspect the elements on the page. See more [tips for working with CSS](#tips-for-working-with-css) below.
+
+In the example below, the CSS selector `.left-align #stimulus` selects the element with the ID "stimulus" that is _inside_ of an element with the class "left-align".
+
+```html
+<style>
+  .left-align #stimulus {
+    text-align: left;
+  }
+</style>
+<script>
+  var trial = {
+    type: 'html-keyboard-response',
+    stimulus: '<p id="stimulus">This text will be left-aligned.</p>',
+    css_classes: ['left-align']
+  };
+</script>
+```
+
+It's also possible to pass multiple class names to the `css_classes` parameter. This can be useful for creating conditions that involve crossing different style-related factors. This example shows you how to combine two text alignment and two text color factors to produce four different stimulus conditions:
+
+```html
+<style>
+  .left-align { text-align: left; }
+  .right-align { text-align: right; }
+  .teal-text { color: teal; }
+  .purple-text { color: purple; }
+</style>
+<script>
+  var trial = {
+    type: 'html-keyboard-response',
+    stimulus: 'This is the stimulus.',
+    css_classes: jsPsych.timelineVariable('css_classes')
+  };
+  var trial_procedure = {
+    timeline: [trial],
+    timeline_variables: [
+      {css_classes: ['left-align', 'teal-text']},
+      {css_classes: ['left-align', 'purple-text']},
+      {css_classes: ['right-align', 'teal-text']},
+      {css_classes: ['right-align', 'purple-text']}
+    ]
+  };
+</script>
+```
+
+See the "css-classes-parameter.html" file in jsPsych's examples subfolder for more explanation and examples.
+
+## Tips for working with CSS
+
+Your browser's developer tools contain very useful features for exploring and debugging your experiment's style and formatting.
Open your browser's developer tools and click on the Element Inspector button or go to the Elements tab. Once you have selected an element on the page, you can see all of the information that can be used to select it, including:
+
+1. tag name, e.g., "div", "p", "img", "button"
+2. ID, if it has one
+3. class(es), if it has any
+
+You can then use this information to create a CSS selector to modify that element's style.
+
+![devtools-element-inspector](../img/devtools-inspect-element.png)
+
+As you can see, jsPsych adds its own IDs and classes to many elements. You can use the developer tools to determine what IDs and classes already exist for the elements that you want to modify, as you can often just use these instead of adding your own. For instance, in the "html-keyboard-response" plugin, the stimulus will always be shown in a <div> with the ID "jspsych-html-keyboard-response-stimulus". So you can create a CSS rule that is applied to all "html-keyboard-response" stimuli like this:
+
+```css
+#jspsych-html-keyboard-response-stimulus {
+  color: white;
+  background-color: blue;
+  width: 100px;
+  border: 4px solid black;
+}
+```
+
+As another example, most jsPsych buttons have the class "jspsych-btn", so you can use this class to change the default button styling:
+
+```css
+.jspsych-btn {
+  padding: 20px 20px;
+  font-size: 25px;
+  border-color: black;
+}
+```
+
+You can also use the developer tools to change an element's CSS and immediately see the effect that the changes will have on the page. These changes are just temporary, so you will still need to use one of the methods described above to add the CSS changes to your experiment. However, making changes in the developer tools is very useful for figuring out which CSS properties to change and which values to use. This area of the developer tools also shows what styles are currently applied to the element and where those style rules are coming from.
+ + + +![devtools-change-css](../img/devtools-change-css.png) + + + +There are a few things to be aware of while debugging problems with CSS. + +1. When there are conflicting CSS rules, *some CSS rules will take precedence over others*. For instance, inline CSS usually takes precedence over other CSS rules, and more specific CSS selectors usually take precedence over less specific ones. +2. When there are conflicting CSS rules that have the same level of precedence, *the last rule will override any earlier rules*. For that reason it's important to add your own custom stylesheet _after_ the default jspsych.css stylesheet. See [this page about CSS precedence](https://www.w3schools.com/css/css_specificity.asp) for more information. + +If one CSS style rule is overridden by another one, the rule that is overridden will appear in strikethrough text in the element's "Styles" section. Also, if you are using an incorrect CSS property name or an invalid value, then that will show up here as an error, indicated by both strikethrough text and a little yellow warning symbol. + + + +![devtools-css-error](../img/devtools-css-errors.png) + diff --git a/docs/overview/timeline.md b/docs/overview/timeline.md index 095e877375..16cf1022c2 100644 --- a/docs/overview/timeline.md +++ b/docs/overview/timeline.md @@ -130,7 +130,8 @@ In the above version, there are four separate trials defined in the `timeline_va What if we wanted the stimuli to be a little more complex, with a name displayed below each face? And let's add an additional step where the name is displayed prior to the face appearing. (Maybe this is one condition of an experiment investigating whether the order of name-face or face-name affects retention.) -To do this, we will need to use the `jsPsych.timelineVariable()` method in a slightly different way. Instead of using it as the parameter, we are going to create a dynamic parameter using a function and place the call to `jsPsych.timelineVariable()` inside this function. 
This will allow us to create an HTML string that has both the image and the name. Note that there is a subtle syntax difference: there is an extra parameter when `jsPsych.timelineVariable()` is called within a function. This `true` value causes the `jsPsych.timelineVariable()` to immediately return the value of the timeline variable. In a normal context, the function `jsPsych.timelineVariable()` returns a function. This is why `jsPsych.timelineVariable()` can be used directly as a parameter even though the parameter is dynamic.
+This time, instead of using `jsPsych.timelineVariable()` as the stimulus parameter value, we are going to create a dynamic parameter (function), and place the call to `jsPsych.timelineVariable()` inside this function. This will allow us to create a parameter value that combines multiple bits of information, such as one or more of the values that change across trials (which come from the `timeline_variables` array), and/or anything that doesn't change across trials. In this example, we'll need to switch to using the "html-keyboard-response" plugin so that we can define the stimulus as a custom HTML string that contains an image and text (instead of just an image file). The value of the stimulus parameter will be a function that returns an HTML string that contains both the image and the name.
+(Note: in previous versions of jsPsych, an extra `true` parameter had to be added when calling `jsPsych.timelineVariable()` from inside a function. As of jsPsych v6.3, `jsPsych.timelineVariable()` automatically detects the context in which it's called, so this additional `true` parameter is not required.)


```javascript
@@ -151,8 +152,8 @@ var face_name_procedure = {
     {
       type: 'html-keyboard-response',
       stimulus: function(){
-      var html="<img src='"+jsPsych.timelineVariable('face', true)+"'>";
-      html += "<p>"+jsPsych.timelineVariable('name', true)+"</p>";
+      var html="<img src='"+jsPsych.timelineVariable('face')+"'>";
+      html += "<p>"+jsPsych.timelineVariable('name')+"</p>";
       return html;
     },
     choices: jsPsych.NO_KEYS,
@@ -184,24 +185,6 @@ var face_name_procedure = {
 }
 ```

-### Repeating trials
-
-If we want to repeat the set of trials multiple times, then we can set `repetitions` to an integer. If `randomize_order` is also `true`, the order will re-randomize before every repetition.
-
-```javascript
-var face_name_procedure = {
-  // timeline parameter hidden to save space ...
-  timeline_variables: [
-    { face: 'person-1.jpg', name: 'Alex' },
-    { face: 'person-2.jpg', name: 'Beth' },
-    { face: 'person-3.jpg', name: 'Chad' },
-    { face: 'person-4.jpg', name: 'Dave' }
-  ],
-  randomize_order: true,
-  repetitions: 3
-}
-```
-
 ### Sampling methods

 There are also a set of sampling methods that can be used to select a set of trials from the timeline_variables. Sampling is declared by creating a `sample` parameter. The `sample` parameter is given an object of arguments. The `type` parameter in this object controls the type of sampling that is done. Valid values for `type` are
@@ -324,9 +307,41 @@ var face_name_procedure = {
 }
 ```

+## Repeating a set of trials
+
+To repeat a timeline multiple times, you can create an object (node) that contains a `timeline`, which is the timeline array to repeat, and `repetitions`, which is the number of times to repeat that timeline.
+
+```javascript
+var trial = {
+  type: 'html-keyboard-response',
+  stimulus: 'This trial will be repeated twice.'
+}
+
+var node = {
+  timeline: [trial],
+  repetitions: 2
+}
+```
+
+The `repetitions` parameter can be used alongside other node parameters, such as timeline variables, loop functions, and/or conditional functions. If you are using `timeline_variables` and `randomize_order` is `true`, then the order of the timeline variables will re-randomize before every repetition.
+
+```javascript
+var face_name_procedure = {
+  // timeline parameter hidden to save space ...
+ timeline_variables: [ + { face: 'person-1.jpg', name: 'Alex' }, + { face: 'person-2.jpg', name: 'Beth' }, + { face: 'person-3.jpg', name: 'Chad' }, + { face: 'person-4.jpg', name: 'Dave' } + ], + randomize_order: true, + repetitions: 3 +} +``` + ## Looping timelines -Any timeline can be looped using the `loop_function` option. The loop function should be a function that evaluates to `true` if the timeline should repeat, and `false` if the timeline should end. It receives a single parameter: the DataCollection object with all of the data from the trials executed in the last iteration of the timeline. The loop function will be evaluated after the timeline is completed. +Any timeline can be looped using the `loop_function` option. The loop function should be a function that evaluates to `true` if the timeline should repeat, and `false` if the timeline should end. It receives a single parameter, named `data` by convention. This parameter will be the [DataCollection object](/core_library/jspsych-data/#datacollection) with all of the data from the trials executed in the last iteration of the timeline. The loop function will be evaluated after the timeline is completed. 
```javascript var trial = { @@ -337,7 +352,7 @@ var trial = { var loop_node = { timeline: [trial], loop_function: function(data){ - if(jsPsych.pluginAPI.convertKeyCharacterToKeyCode('r') == data.values()[0].key_press){ + if(jsPsych.pluginAPI.compareKeys(data.values()[0].response, 'r')){ return true; } else { return false; @@ -367,7 +382,7 @@ var if_node = { // get the data from the previous trial, // and check which key was pressed var data = jsPsych.data.get().last(1).values()[0]; - if(data.key_press == jsPsych.pluginAPI.convertKeyCharacterToKeyCode('s')){ + if(jsPsych.pluginAPI.compareKeys(data.response, 's')){ return false; } else { return true; @@ -385,3 +400,58 @@ jsPsych.init({ on_finish: function(){jsPsych.data.displayData(); } }); ``` + +## Timeline start and finish functions + +You can run a custom function at the start and end of a timeline node using the `on_timeline_start` and `on_timeline_finish` callback function parameters. These are functions that will run when the timeline starts and ends, respectively. + +```javascript +var procedure = { + timeline: [trial_1, trial_2], + on_timeline_start: function() { + console.log('The trial procedure just started.') + }, + on_timeline_finish: function() { + console.log('The trial procedure just finished.') + } +} +``` + +This works the same way with timeline variables. The `on_timeline_start` and `on_timeline_finish` functions will run when timeline variables trials start and end, respectively. + +```javascript +var face_name_procedure = { + // timeline parameter hidden to save space ... 
+  timeline_variables: [
+    { face: 'person-1.jpg', name: 'Alex' },
+    { face: 'person-2.jpg', name: 'Beth' },
+    { face: 'person-3.jpg', name: 'Chad' },
+    { face: 'person-4.jpg', name: 'Dave' }
+  ],
+  randomize_order: true,
+  on_timeline_start: function() {
+    console.log('First trial is starting.')
+  },
+  on_timeline_finish: function() {
+    console.log('Last trial just finished.')
+  }
+}
+```
+
+When the `repetitions` option is used (and is greater than 1), these functions will run once per repetition of the timeline.
+
+```javascript
+var repetition_count = 0;
+
+var procedure = {
+  timeline: [trial_1, trial_2],
+  repetitions: 3,
+  on_timeline_start: function() {
+    repetition_count++;
+    console.log('Repetition number ',repetition_count,' has just started.');
+  },
+  on_timeline_finish: function() {
+    console.log('Repetition number ',repetition_count,' has just finished.')
+  }
+}
+```
\ No newline at end of file
diff --git a/docs/overview/trial.md b/docs/overview/trial.md
deleted file mode 100644
index 024c6bf076..0000000000
--- a/docs/overview/trial.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# Advanced Options for Trials
-
-The parameters available for a trial depend primarily on what plugin is used for the trial. However, there are several options that do not depend on the particular plugin; they are available for all trials.
-
-## The data parameter
-
-The `data` parameter enables tagging the trial with additional properties. This can be useful for storing properties of the trial that are not directly apparent from the values that the plugin records. The `data` parameter value should be an object that contains key-value pairs.
-
-A simple example is the [Flanker Task](https://en.wikipedia.org/wiki/Eriksen_flanker_task). In this experiment, participants respond to the direction of an arrow, pressing a key to the left for a left-pointing arrow (<) and a key to the right for a right-pointing arrow (>).
The arrow appears in the center of *flankers*, or arrows that the participant should ignore. Those flankers can be congruent (>>>>>) or incongruent (<<><<). - -A trial for the Flanker Task written with jsPsych might look like this: - -```javascript -var trial = { - type: 'html-keyboard-response', - stimulus: '<<<<<', - choices: ['f','j'], - data: { - stimulus_type: 'congruent', - target_direction: 'left' - } -} -``` - -Note the use of the data parameter to add a property `stimulus_type` with the value `congruent` and a property `target_direction` with the value `left`. Having these properties recorded directly in the data simplifies data analysis, making it easy to aggregate data by `stimulus_type` and/or `target_direction`. - -## Inter-trial interval - -The default inter-trial interval (ITI) in jsPsych is 0 ms. This can be adjusted at the experiment-wide level by changing the `default_iti` parameter in `jsPsych.init()`. - -The ITI can also be controlled at the trial level through the `post_trial_gap` parameter. Setting this parameter to a positive integer *x* will cause a blank screen to display after the trial for *x* milliseconds. - -```javascript -var trial = { - type: 'html-keyboard-response', - stimulus: 'There will be a 1.5 second blank screen after this trial.', - post_trial_gap: 1500 -} -``` - -## The on_start event - -Immediately before a trial runs, there is an opportunity to run an arbitrary function through the `on_start` event handler. This event handler is passed a single argument containing an *editable* copy of the trial parameters. This event handler can therefore be used to alter the trial based on the state of the experiment, among other uses. 
- -```javascript -var trial = { - type: 'html-keyboard-response', - stimulus: '<<<<<', - choices: ['f','j'], - data: { - stimulus_type: 'congruent', - target_direction: 'left' - }, - on_start: function(trial){ - trial.stimulus = '<<><<'; - trial.data.stimulus_type = 'incongruent'; - } -} -``` - -## The on_finish event - -After a trial is completed, there is an opportunity to run an arbitrary function through the `on_finish` event handler. This event handler is passed a single argument containing an *editable* copy of the data recorded for that trial. This event handler can therefore be used to update the state of the experiment based on the data collected or modify the data collected. - -This can be useful to calculate new data properties that were unknowable at the start of the trial. For example, with the Flanker Task example above, the `on_finish` event could add a new property `correct`. - -```javascript -var trial = { - type: 'html-keyboard-response', - stimulus: '<<<<<', - choices: ['f','j'], - data: { - stimulus_type: 'congruent', - target_direction: 'left' - }, - on_finish: function(data){ - if(data.key_press == 70){// 70 is the numeric code for f - data.correct = true; // can add property correct by modify data object directly - } else { - data.correct = false; - } - } -} -``` - -## The on_load event - -The `on_load` callback can be added to any trial. The callback will trigger once the trial has completed loading. For most plugins, this will occur once the display has been initially updated but before any user interactions or timed events (e.g., animations) have occurred. - -#### Sample use -```javascript -var trial = { - type: 'image-keyboard-response', - stimulus: 'imgA.png', - on_load: function() { - console.log('The trial just finished loading.'); - } -}; -``` - -## Dynamic parameters - -Most plugins allow parameters to be functions. In a typical declaration of a jsPsych trial, parameters have to be known at the start of the experiment. 
This makes it impossible to alter the content of the trial based on the outcome of previous trials. When functions are used as parameters for a block of trials, the function is evaluated at the start of each trial, and the return value of the function is used as the parameter. This enables dynamic updating of the parameter based on data that a subject has generated.
-
-Here is a sketch of how this functionality could be used to display feedback to a subject in the Flanker Task.
-
-```javascript
-
-var timeline = [];
-
-var trial = {
-  type: 'html-keyboard-response',
-  stimulus: '<<<<<',
-  choices: ['f','j'],
-  data: {
-    stimulus_type: 'congruent',
-    target_direction: 'left'
-  },
-  on_finish: function(data){
-    if(data.key_press == 70){// 70 is the numeric code for f
-      data.correct = true; // can add the property correct by modifying the data object directly
-    } else {
-      data.correct = false;
-    }
-  }
-}
-
-var feedback = {
-  type: 'html-keyboard-response',
-  stimulus: function(){
-    var last_trial_correct = jsPsych.data.get().last(1).values()[0].correct;
-    if(last_trial_correct){
-      return "<p>Correct!</p>";
-    } else {
-      return "<p>Wrong.</p>";
-    }
-  }
-}
-
-timeline.push(trial, feedback);
-
-```
diff --git a/docs/plugins/creating-a-plugin.md b/docs/plugins/creating-a-plugin.md
deleted file mode 100644
index 11996ac9ab..0000000000
--- a/docs/plugins/creating-a-plugin.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# Creating a new plugin
-
-Creating new plugins is the way to add new kinds of tasks to jsPsych. A task can be virtually any kind of activity. If it can be implemented in JavaScript, then it almost certainly can be turned into a plugin.
-
-## What's in a plugin file?
-
-Plugin files follow a specific template. Adherence to the template is what allows jsPsych to run a plugin without knowing anything about what the plugin is doing. What makes plugins so flexible is that the template imposes very few requirements on the code. Here's what an empty plugin template looks like:
-
-```
-jsPsych.plugins['plugin-name'] = (function(){
-
-  var plugin = {};
-
-  plugin.info = {
-    name: 'plugin-name',
-    parameters: {
-    }
-  }
-
-  plugin.trial = function(display_element, trial){
-    jsPsych.finishTrial();
-  }
-
-  return plugin;
-
-})();
-```
-
-This plugin will work! It defines a plugin called 'plugin-name', and it does absolutely nothing. However, it won't break the experiment, and jsPsych will understand that this is a valid plugin.
-
-Let's examine it in more detail.
-
-The overall structure of the plugin is defined using a module JavaScript design pattern. This pattern uses a technique called an anonymous closure. This is why the first line has `(function(){` and the last line is `})();`. The details aren't important, but if you want to learn more about it, [this is a nice overview](http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html). The reason this pattern is useful is that it allows for persistent state and private scope. In other words, the plugin is isolated and can't be altered by other plugins.
- -The module, created by the `(function(){` `})();` expressions, contains an object called `plugin` that has two properties, `info` and `trial`. The `plugin` object is returned at the end of the module, which is what assigns the defined properties of `plugin` to `jsPsych['plugin-name']`. - -### plugin.trial - -The `trial` method is responsible for running a single trial. There are two parameters that are passed into the trial method. The first, `display_element`, is the DOM element where jsPsych content is being rendered. This parameter will be an `HTMLElement`. Generally, you don't need to worry about this parameter being in the correct format, and can assume that it is an `HTMLElement` and use methods of that class. The second, `trial`, is an object containing all of the parameters specified in the corresponding TimelineNode. - -The only requirement for the `trial` method is that it calls `jsPsych.finishTrial()` when it is done. This is how jsPsych knows to advance to the next trial in the experiment (or end the experiment if it is the last trial). The plugin can do whatever it needs to do before that point. - -Of course, there are other things that you will probably want the plugin to do besides just end. Here are some examples: - -#### Change the content of the display - -There are a few ways to change the content of the display. The `display_element` parameter of the trial method contains the DOM element for displaying content, so you can use various JavaScript methods for interaction with the display element. A common one is to change the `innerHTML`. - -```javascript -var html_content = '
<p>This is the first paragraph</p>
'; -html_content += '
<p>This is the second paragraph</p>
'; - -display_element.innerHTML = html_content; -``` -It is often appropriate to use `innerHTML` to clear the display at the end of a trial: - -```javascript -// clear the display -display_element.innerHTML = ''; -``` - -#### Write data - -Plugins exist to collect data, so saving data is obviously a crucial thing to do. You can pass an object of data as the parameter to `jsPsych.finishTrial()`: - -```javascript -var data = { - correct: true, - rt: 350 -} - -jsPsych.finishTrial(data) -``` - -The data recorded will be that `correct` is `true` and that `rt` is `350`. Additional data for the trial will also be collected automatically by the jsPsych library. - -## The plugin template - -An empty plugin template is included in the `plugins/template` folder. diff --git a/docs/plugins/jspsych-animation.md b/docs/plugins/jspsych-animation.md index 701d1ef9c4..c95568d830 100644 --- a/docs/plugins/jspsych-animation.md +++ b/docs/plugins/jspsych-animation.md @@ -4,7 +4,7 @@ This plugin displays a sequence of images at a fixed frame rate. The sequence ca ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -12,18 +12,18 @@ stimuli | array | *undefined* | Each element of the array is a path to an image frame_time | numeric | 250 | How long to display each image (in milliseconds). frame_isi | numeric | 0 | If greater than 0, then a gap will be shown between each image in the sequence. This parameter specifies the length of the gap. 
sequence_reps | numeric | 1 | How many times to show the entire sequence. There will be no gap (other than the gap specified by `frame_isi`) between repetitions. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. +choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key(s) to press). - +render_on_canvas | boolean | true | If true, the images will be drawn onto a canvas element. This prevents a blank screen (white flash) between consecutive images in some browsers, like Firefox and Edge. If false, the image will be shown via an img element, as in previous versions of jsPsych. 
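As a quick sketch of how these parameters fit together (the image paths are hypothetical), an animation trial might look like:

```javascript
// Hypothetical image paths; timing values are in milliseconds
var animation_trial = {
  type: 'animation',
  stimuli: ['img/face_1.jpg', 'img/face_2.jpg', 'img/face_3.jpg'],
  frame_time: 100,  // show each image for 100 ms
  frame_isi: 0,     // no gap between images
  sequence_reps: 2, // play the whole sequence twice
  choices: ['a', 'l']
};
```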
## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ -animation_sequence | JSON | An array, encoded in JSON format. Each element of the array is an object that represents a stimulus in the animation sequence. Each object has a `stimulus` property, which is the image that was displayed, and a `time` property, which is the time in ms, measured from when the sequence began, that the stimulus was displayed. -responses | JSON | An array, encoded in JSON format. Each element of the array is an object representing a response given by the subject. Each object has a `stimulus` property, indicating which image was displayed when the key was pressed, an `rt` property, indicating the time of the key press relative to the start of the animation, and a `key_press` property, indicating which key was pressed. +animation_sequence | array | An array, where each element is an object that represents a stimulus in the animation sequence. Each object has a `stimulus` property, which is the image that was displayed, and a `time` property, which is the time in ms, measured from when the sequence began, that the stimulus was displayed. The array will be encoded in JSON format when data is saved using either the `.json()` or `.csv()` functions. +response | array | An array, where each element is an object representing a response given by the subject. Each object has a `stimulus` property, indicating which image was displayed when the key was pressed, an `rt` property, indicating the time of the key press relative to the start of the animation, and a `key_press` property, indicating which key was pressed. 
The array will be encoded in JSON format when data is saved using either the `.json()` or `.csv()` functions. ## Examples diff --git a/docs/plugins/jspsych-audio-button-response.md b/docs/plugins/jspsych-audio-button-response.md index f32c705f4a..7b240fd6a4 100644 --- a/docs/plugins/jspsych-audio-button-response.md +++ b/docs/plugins/jspsych-audio-button-response.md @@ -4,34 +4,35 @@ This plugin plays audio files and records responses generated with a button clic If the browser supports it, audio files are played using the WebAudio API. This allows for reasonably precise timing of the playback. The timing of responses generated is measured against the WebAudio specific clock, improving the measurement of response times. If the browser does not support the WebAudio API, then the audio file is played with HTML5 audio. -Audio files are automatically preloaded by jsPsych. However, if you are using timeline variables or another dynamic method to specify the audio stimulus you will need to [manually preload](/overview/media-preloading/#manual-preloading) the audio. +Audio files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the audio stimulus, you will need to [manually preload](/overview/media-preloading/#manual-preloading) the audio. -The trial can end when the subject responds, when the audio file has finished playing, or if the subject has failed to respond within a fixed length of time. +The trial can end when the subject responds, when the audio file has finished playing, or if the subject has failed to respond within a fixed length of time. You can also prevent a button response from being made before the audio has finished playing. ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. 
- -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | audio file | undefined | Path to audio file to be played. -choices | array of strings | [] | Labels for the buttons. Each different string in the array will generate a different button. -button_html | HTML string | `''` | A template of HTML for generating the button elements. You can override this to create customized buttons of various kinds. The string `%choice%` will be changed to the corresponding element of the `choices` array. You may also specify an array of strings, if you need different HTML to render for each button. If you do specify an array, the `choices` array and this array must have the same length. The HTML from position 0 in the `button_html` array will be used to create the button for element 0 in the `choices` array, and so on. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). -trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, the trial will wait for a response indefinitely. -margin_vertical | string | '0px' | Vertical margin of the button(s). -margin_horizontal | string | '8px' | Horizontal margin of the button(s). -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `timing_response` is reached. 
You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. -trial_ends_after_audio | boolean | false | If true, then the trial will end as soon as the audio file finishes playing. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ------------------------------ | ---------------- | ---------------------------------------- | ---------------------------------------- | +| stimulus | audio file | *undefined* | Path to audio file to be played. | +| choices | array of strings | *undefined* | Labels for the buttons. Each different string in the array will generate a different button. | +| button_html | HTML string | `''` | A template of HTML for generating the button elements. You can override this to create customized buttons of various kinds. The string `%choice%` will be changed to the corresponding element of the `choices` array. You may also specify an array of strings, if you need different HTML to render for each button. If you do specify an array, the `choices` array and this array must have the same length. The HTML from position 0 in the `button_html` array will be used to create the button for element 0 in the `choices` array, and so on. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. 
If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, the trial will wait for a response indefinitely. | +| margin_vertical | string | '0px' | Vertical margin of the button(s). | +| margin_horizontal | string | '8px' | Horizontal margin of the button(s). | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to listen to the stimulus for a fixed amount of time, even if they respond before the time is complete. | +| trial_ends_after_audio | boolean | false | If true, then the trial will end as soon as the audio file finishes playing. | +| response_allowed_while_playing | boolean | true | If true, then responses are allowed while the audio is playing. If false, then the audio must finish playing before the button choices are enabled and a response is accepted. Once the audio has played all the way through, the buttons are enabled and a response is allowed (including while the audio is being re-played via on-screen playback controls). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. 
-button_pressed | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. +| Name | Type | Value | +| -------------- | ------- | ---------------------------------------- | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first began playing until the subject's response. | +| response | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. | ## Examples diff --git a/docs/plugins/jspsych-audio-keyboard-response.md b/docs/plugins/jspsych-audio-keyboard-response.md index f77131dcc9..e7e9290243 100644 --- a/docs/plugins/jspsych-audio-keyboard-response.md +++ b/docs/plugins/jspsych-audio-keyboard-response.md @@ -4,32 +4,33 @@ This plugin plays audio files and records responses generated with the keyboard. If the browser supports it, audio files are played using the WebAudio API. This allows for reasonably precise timing of the playback. The timing of responses generated is measured against the WebAudio specific clock, improving the measurement of response times. If the browser does not support the WebAudio API, then the audio file is played with HTML5 audio. -Audio files are automatically preloaded by jsPsych. However, if you are using timeline variables or another dynamic method to specify the audio stimulus you will need to [manually preload](/overview/media-preloading/#manual-preloading) the audio. +Audio files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the audio stimulus, then you will need to [manually preload](/overview/media-preloading/#manual-preloading) the audio. 
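Assuming a hypothetical file path, a manual preload with the `preload` plugin might be sketched like this, with the preload trial placed before the audio trial on the timeline:

```javascript
// Hypothetical audio path; the preload trial loads the file
// before the trial that plays it
var preload = {
  type: 'preload',
  audio: ['sound/tone.mp3']
};

var trial = {
  type: 'audio-keyboard-response',
  stimulus: 'sound/tone.mp3',
  choices: ['e', 'i']
};

var timeline = [preload, trial];
```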
-The trial can end when the subject responds, when the audio file has finished playing, or if the subject has failed to respond within a fixed length of time. +The trial can end when the subject responds, when the audio file has finished playing, or if the subject has failed to respond within a fixed length of time. You can also prevent a keyboard response from being recorded before the audio has finished playing. ## Parameters -Parameters with a default value of undefined must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of undefined must be specified. Other parameters can be left unspecified if the default value is acceptable. -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | audio file | undefined | Path to audio file to be played. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). -trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. 
If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. -trial_ends_after_audio | boolean | false | If true, then the trial will end as soon as the audio file finishes playing. +| Parameter | Type | Default Value | Description | +| ------------------------------ | ---------------- | ------------------ | ---------------------------------------- | +| stimulus | audio file | undefined | Path to audio file to be played. | +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. 
The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to listen to the stimulus for a fixed amount of time, even if they respond before the time is complete. | +| trial_ends_after_audio | boolean | false | If true, then the trial will end as soon as the audio file finishes playing. | +| response_allowed_while_playing | boolean | true | If true, then responses are allowed while the audio is playing. If false, then the audio must finish playing before a keyboard response is accepted. Once the audio has played all the way through, a valid keyboard response is allowed (including while the audio is being re-played via on-screen playback controls). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -key_press | numeric | Indicates which key the subject pressed.
The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -stimulus | string | Path to the audio file that played during the trial. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| response | string | Indicates which key the subject pressed. If no key was pressed before the trial ended, then the value will be `null`. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first began playing until the subject made a key response. If no key was pressed before the trial ended, then the value will be `null`. | +| stimulus | string | Path to the audio file that played during the trial. | ## Examples @@ -41,7 +42,7 @@ var trial = { stimulus: 'sound/tone.mp3', choices: ['e', 'i'], prompt: "
<p>Is the pitch high or low? Press 'e' for low and 'i' for high.</p>
", - response_ends_trial: false + response_ends_trial: false }; ``` diff --git a/docs/plugins/jspsych-audio-slider-response.md b/docs/plugins/jspsych-audio-slider-response.md index 76a9a1ce15..6326d20bb7 100644 --- a/docs/plugins/jspsych-audio-slider-response.md +++ b/docs/plugins/jspsych-audio-slider-response.md @@ -2,40 +2,42 @@ This plugin plays an audio file and allows the subject to respond by dragging a slider. -If the browser supports it, audio files are played using the WebAudio API.This allows for reasonably precise timing of the playback. The timing of responses generated is measured against the WebAudio specific clock, improving the measurement of response times. If the browser does not support the WebAudio API, then the audio file is played with HTML5 audio. +If the browser supports it, audio files are played using the WebAudio API. This allows for reasonably precise timing of the playback. The timing of responses generated is measured against the WebAudio specific clock, improving the measurement of response times. If the browser does not support the WebAudio API, then the audio file is played with HTML5 audio. -Audio files are automatically preloaded by jsPsych. However, if you are using timeline variables or another dynamic method to specify the audio stimulus you will need to [manually preload](/overview/media-preloading/#manual-preloading) the audio. +Audio files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the audio stimulus, then you will need to [manually preload](/overview/media-preloading/#manual-preloading) the audio. -The trial can end when the subject responds, or if the subject has failed to respond within a fixed length of time. +The trial can end when the subject responds, or if the subject has failed to respond within a fixed length of time. 
You can also prevent the slider response from being made before the audio has finished playing. ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | audio file | *undefined* | Audio file to be played -labels | array of strings | Labels displayed at equidistant locations on the slider. For example, two labels will be placed at the ends of the slider. Three labels would place two at the ends and one in the middle. Four will place two at the ends, and the other two will be at 33% and 67% of the slider width. -button_label | string | 'Continue' | Label of the button to end the trial. -min | integer | 0 | Sets the minimum value of the slider -max | integer | 100 | Sets the maximum value of the slider -start | integer | 50 | Sets the starting value of the slider -step | integer | 1 | Sets the step of the slider. This is the smallest amount by which the slider can change. -slider_width | integer | null | Set the width of the slider in pixels. If left null, then the width will be equal to the widest element in the display. -require_movement | boolean | false | If true, the subject must move the slider before clicking the continue button. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). -trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. 
-response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to listen to the stimulus for a fixed amount of time, even if they respond before the time is complete. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ------------------------------ | ---------------- | ------------- | ---------------------------------------- | +| stimulus | audio file | *undefined* | Audio file to be played | +| labels | array of strings | [] | Labels displayed at equidistant locations on the slider. For example, two labels will be placed at the ends of the slider. Three labels would place two at the ends and one in the middle. Four will place two at the ends, and the other two will be at 33% and 67% of the slider width. | +| button_label | string | 'Continue' | Label of the button to end the trial. | +| min | integer | 0 | Sets the minimum value of the slider | +| max | integer | 100 | Sets the maximum value of the slider | +| slider_start | integer | 50 | Sets the starting value of the slider | +| step | integer | 1 | Sets the step of the slider. This is the smallest amount by which the slider can change. | +| slider_width | integer | null | Set the width of the slider in pixels. If left null, then the width will be equal to the widest element in the display. | +| require_movement | boolean | false | If true, the subject must move the slider before clicking the continue button. 
| +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to listen to the stimulus for a fixed amount of time, even if they respond before the time is complete. | +| response_allowed_while_playing | boolean | true | If true, then responses are allowed while the audio is playing. If false, then the audio must finish playing before the slider is enabled and the trial can end via the next button click. Once the audio has played all the way through, the slider is enabled and a response is allowed (including while the audio is being re-played via on-screen playback controls). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -response | numeric | The numeric value of the slider. 
-rt | numeric | The time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -stimulus | string | The path of the audio file that was played. +| Name | Type | Value | +| ------------ | ------- | ---------------------------------------- | +| response | numeric | The numeric value of the slider. | +| rt | numeric | The time in milliseconds for the subject to make a response. The time is measured from when the stimulus first began playing until the subject's response. | +| stimulus | string | The path of the audio file that was played. | +| slider_start | numeric | The starting value of the slider. | ## Examples diff --git a/docs/plugins/jspsych-call-function.md b/docs/plugins/jspsych-call-function.md index c4c631d174..b1ffba32d3 100644 --- a/docs/plugins/jspsych-call-function.md +++ b/docs/plugins/jspsych-call-function.md @@ -6,7 +6,7 @@ The function cannot take any arguments. If arguments are needed, then an anonymo ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -16,7 +16,7 @@ async | boolean | `false` | Set to true if `func` is an asynchoronous function. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. 
+In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ diff --git a/docs/plugins/jspsych-canvas-button-response.md b/docs/plugins/jspsych-canvas-button-response.md new file mode 100644 index 0000000000..30e5006f88 --- /dev/null +++ b/docs/plugins/jspsych-canvas-button-response.md @@ -0,0 +1,66 @@ +# jspsych-canvas-button-response + +This plugin can be used to draw a stimulus on a [HTML canvas element](https://www.w3schools.com/html/html5_canvas.asp), and record a button click response and response time. The canvas stimulus can be useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically if the subject has failed to respond within a fixed length of time. One or more button choices will be displayed under the canvas, and the button style can be customized using HTML formatting. + +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +Parameter | Type | Default Value | Description +----------|------|---------------|------------ +stimulus | function | *undefined* | The function to draw on the canvas. This function automatically takes a canvas element as its only argument, e.g. `function(c) {...}` or `function drawStim(c) {...}`, where `c` refers to the canvas element. Note that the stimulus function will still generally need to set the correct context itself, using a line like `let ctx = c.getContext("2d")`. 
+canvas_size | array | [500, 500] | Array that defines the size of the canvas element in pixels. First value is height, second value is width. +choices | array of strings | [] | Labels for the buttons. Each different string in the array will generate a different button. +button_html | HTML string | `''` | A template of HTML for generating the button elements. You can override this to create customized buttons of various kinds. The string `%choice%` will be changed to the corresponding element of the `choices` array. You may also specify an array of strings, if you need different HTML to render for each button. If you do specify an array, the `choices` array and this array must have the same length. The HTML from position 0 in the `button_html` array will be used to create the button for element 0 in the `choices` array, and so on. +prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., what question to answer). +trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, the trial will wait for a response indefinitely. +stimulus_duration | numeric | null | How long to display the stimulus in milliseconds. The visibility CSS property of the stimulus will be set to `hidden` after this time has elapsed. If this is null, then the stimulus will remain visible until the trial ends. +margin_vertical | string | '0px' | Vertical margin of the button(s). +margin_horizontal | string | '8px' | Horizontal margin of the button(s). 
+response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. + +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. + +Name | Type | Value +-----|------|------ +rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. +response | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. + +Note: the canvas stimulus is *not* included in the trial data because it is a function. Any stimulus information that should be saved in the trial data can be added via the `data` parameter. + +## Examples + +### Drawing circles based on parameters + +```javascript +function filledCirc(canvas, radius, color){ + var ctx = canvas.getContext("2d"); + ctx.beginPath(); + ctx.arc(250, 250, radius, 0, 2 * Math.PI); + ctx.fillStyle = color; + ctx.fill() +} + +var circle_1 = { + type: 'canvas-button-response', + stimulus: function (c) { + filledCirc(c, 100, 'blue'); + }, + choices: ['Red', 'Green', 'Blue'], + prompt: '
<p>What color is the circle?</p>
', + data: {color: 'blue', radius: 100} +}; + +var circle_2 = { + type: 'canvas-button-response', + stimulus: function (c) { + filledCirc(c, 150, 'green'); + }, + choices: ['Larger', 'Smaller'], + prompt: '
<p>Is this circle larger or smaller than the last one?</p>
', + data: {color: 'green', radius: 150} +}; + +``` \ No newline at end of file diff --git a/docs/plugins/jspsych-canvas-keyboard-response.md b/docs/plugins/jspsych-canvas-keyboard-response.md new file mode 100644 index 0000000000..f03478397b --- /dev/null +++ b/docs/plugins/jspsych-canvas-keyboard-response.md @@ -0,0 +1,68 @@ +# jspsych-canvas-keyboard-response + +This plugin can be used to draw a stimulus on a [HTML canvas element](https://www.w3schools.com/html/html5_canvas.asp) and record a keyboard response. The canvas stimulus can be useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically if the subject has failed to respond within a fixed length of time. + +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ------------------- | ---------------- | ------------------ | ---------------------------------------- | +| stimulus | function | *undefined* | The function to draw on the canvas. This function automatically takes a canvas element as its only argument, e.g. `function(c) {...}` or `function drawStim(c) {...}`, where `c` refers to the canvas element. Note that the stimulus function will still generally need to set the correct context itself, using a line like `let ctx = c.getContext("2d")`. | +| canvas_size | array | [500, 500] | Array that defines the size of the canvas element in pixels. First value is height, second value is width. 
| +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| stimulus_duration | numeric | null | How long to display the stimulus in milliseconds. The visibility CSS property of the stimulus will be set to `hidden` after this time has elapsed. If this is null, then the stimulus will remain visible until the trial ends. | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. 
You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. | + +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. + +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | + +Note: the canvas stimulus is *not* included in the trial data because it is a function. Any stimulus information that should be saved in the trial data can be added via the `data` parameter. + +## Examples + +### Draw rectangle and wait for response + +```javascript +function drawRect(c){ + var ctx = c.getContext('2d'); + ctx.beginPath(); + ctx.rect(30, 30, 200, 50); + ctx.stroke(); +} + +var trial = { + type: 'canvas-keyboard-response', + stimulus: drawRect, + choices: ['e','i'], + prompt: '
<p>Is this a circle or a rectangle? Press "e" for circle and "i" for rectangle.</p>
', + data: {shape: 'rectangle'} +} +``` + +### Draw circle, no response allowed + +```javascript +function drawCirc(c){ + var ctx = c.getContext('2d'); + ctx.beginPath(); + ctx.arc(100, 75, 50, 0, 2 * Math.PI); + ctx.stroke(); +} + +var trial = { + type: 'canvas-keyboard-response', + stimulus: drawCirc, + choices: jsPsych.NO_KEYS, + trial_duration: 1000, + data: {shape: 'circle', radius: 50} +} +``` \ No newline at end of file diff --git a/docs/plugins/jspsych-canvas-slider-response.md b/docs/plugins/jspsych-canvas-slider-response.md new file mode 100644 index 0000000000..e96942e979 --- /dev/null +++ b/docs/plugins/jspsych-canvas-slider-response.md @@ -0,0 +1,89 @@ +# jspsych-canvas-slider-response + +This plugin can be used to draw a stimulus on a [HTML canvas element](https://www.w3schools.com/html/html5_canvas.asp) and collect a response within a range of values, which is made by dragging a slider. The canvas stimulus can be useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically if the subject has failed to respond within a fixed length of time. + +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +Parameter | Type | Default Value | Description +----------|------|---------------|------------ +stimulus | function | *undefined* | The function to draw on the canvas. This function automatically takes a canvas element as its only argument, e.g. `function(c) {...}` or `function drawStim(c) {...}`, where `c` refers to the canvas element. 
Note that the stimulus function will still generally need to set the correct context itself, using a line like `let ctx = c.getContext("2d")`. +canvas_size | array | [500, 500] | Array that defines the size of the canvas element in pixels. First value is height, second value is width. +labels | array of strings | [] | Labels displayed at equidistant locations on the slider. For example, two labels will be placed at the ends of the slider. Three labels would place two at the ends and one in the middle. Four will place two at the ends, and the other two will be at 33% and 67% of the slider width. +button_label | string | 'Continue' | Label of the button to end the trial. +min | integer | 0 | Sets the minimum value of the slider. +max | integer | 100 | Sets the maximum value of the slider. +slider_start | integer | 50 | Sets the starting value of the slider. +step | integer | 1 | Sets the step of the slider. This is the smallest amount by which the slider can change. +slider_width | integer | null | Set the width of the slider in pixels. If left null, then the width will be equal to the widest element in the display. +require_movement | boolean | false | If true, the subject must click the slider before clicking the continue button. +prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., what question to answer). +stimulus_duration | numeric | null | How long to display the stimulus in milliseconds. The visibility CSS property of the stimulus will be set to `hidden` after this time has elapsed. If this is null, then the stimulus will remain visible until the trial ends. +trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. 
If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely.
+response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete.
+
+## Data Generated
+
+In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial.
+
+Name | Type | Value
+-----|------|------
+response | numeric | The numeric value of the slider.
+rt | numeric | The time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response.
+
+Note: the canvas stimulus is *not* included in the trial data because it is a function. Any stimulus information that should be saved in the trial data can be added via the `data` parameter.
+
+## Examples
+
+### Draw two squares
+
+```javascript
+var colors = ['#FF3333', '#FF6A33'];
+
+function twoSquares(c) {
+  var ctx = c.getContext('2d');
+  ctx.fillStyle = colors[0];
+  ctx.fillRect(200, 70, 40, 40);
+  ctx.fillStyle = colors[1];
+  ctx.fillRect(260, 70, 40, 40);
+}
+
+var trial = {
+  type: 'canvas-slider-response',
+  stimulus: twoSquares,
+  labels: ['0','10'],
+  canvas_size: [200, 500],
+  prompt: '
<p>How different would you say the colors of these two squares are on a scale from 0 (the same) to 10 (completely different)?</p>
', + data: {color1: colors[0], color2: colors[1]} +} +``` + +### Draw two squares with additional parameters + +```javascript +var colors; + +function twoSquares(c, colors) { + var ctx = c.getContext('2d'); + ctx.fillStyle = colors[0]; + ctx.fillRect(200, 70, 40, 40); + ctx.fillStyle = colors[1]; + ctx.fillRect(260, 70, 40, 40); +} + +var trial = { + type: 'canvas-slider-response', + stimulus: function(c) { + colors = ['darkred', 'cyan']; + twoSquares(c, colors); + }, + labels: ['Exactly
<br>the same','Totally<br>
different'], + canvas_size: [200, 500], + prompt: '
<p>How different would you say the colors of these two squares are?</p>
', + on_finish: function(data) { + data.color1 = colors[0]; + data.color2 = colors[1]; + } +}; +``` \ No newline at end of file diff --git a/docs/plugins/jspsych-categorize-animation.md b/docs/plugins/jspsych-categorize-animation.md index 9b9b6605a9..88c6a48043 100644 --- a/docs/plugins/jspsych-categorize-animation.md +++ b/docs/plugins/jspsych-categorize-animation.md @@ -4,32 +4,33 @@ The categorize animation plugin shows a sequence of images at a specified frame ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimuli | array | *undefined* | Each element of the array is a path to an image file. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. -key_answer | numeric | *undefined* | A [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) indicating the correct response. -text_answer | string | "" | A text label that describes the correct answer. Used in conjunction with the `correct_text` and `incorrect_text` parameters. -correct_text | string | "Correct." | String to show when the correct answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the %ANS% string (see example below). -incorrect_text | string | "Wrong." | String to show when the wrong answer is given. 
Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the %ANS% string (see example below). -frame_time | numeric | 500 | How long to display each image (in milliseconds). -sequence_reps | numeric | 1 | How many times to show the entire sequence. -allow_response_before_complete | boolean | false | If true, the subject can respond before the animation sequence finishes. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). -feedback_duration | numeric | 2000 | How long to show the feedback (milliseconds). +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ------------------------------ | ---------------- | ------------------ | ---------------------------------------- | +| stimuli | array | *undefined* | Each element of the array is a path to an image file. | +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. 
The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. | +| key_answer | string | *undefined* | The key character indicating the correct response. | +| text_answer | string | "" | A text label that describes the correct answer. Used in conjunction with the `correct_text` and `incorrect_text` parameters. | +| correct_text | string | "Correct." | String to show when the correct answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the %ANS% string (see example below). | +| incorrect_text | string | "Wrong." | String to show when the wrong answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the %ANS% string (see example below). | +| frame_time | numeric | 500 | How long to display each image (in milliseconds). | +| sequence_reps | numeric | 1 | How many times to show the entire sequence. | +| allow_response_before_complete | boolean | false | If true, the subject can respond before the animation sequence finishes. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| feedback_duration | numeric | 2000 | How long to show the feedback (milliseconds). | +| render_on_canvas | boolean | true | If true, the images will be drawn onto a canvas element. This prevents a blank screen (white flash) between consecutive images in some browsers, like Firefox and Edge. If false, the image will be shown via an img element, as in previous versions of jsPsych. 
| ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | JSON | JSON encoded representation of the array of stimuli displayed in the trial. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -correct | boolean | `true` if the subject got the correct answer, `false` otherwise. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| stimulus | array | Array of stimuli displayed in the trial. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| correct | boolean | `true` if the subject got the correct answer, `false` otherwise. 
| ## Examples @@ -39,8 +40,8 @@ correct | boolean | `true` if the subject got the correct answer, `false` otherw var animation_trial = { type: 'categorize-animation', stimuli: ["img/face_3.jpg", "img/face_2.jpg", "img/face_4.jpg", "img/face_1.jpg"], - choices: [80, 81], // 80 = 'p', 81 = 'q' - key_answer: 81, // correct answer is 'q' for both trials + choices: ['p', 'q'], + key_answer: 'q', }; ``` @@ -50,8 +51,8 @@ var animation_trial = { var animation_trial = { type: 'categorize-animation', stimuli: ["img/face_3.jpg", "img/face_2.jpg", "img/face_4.jpg", "img/face_1.jpg"], - choices: [80, 81], // 80 = 'p', 81 = 'q' - key_answer: 81, // correct answer is 'q' for both trials + choices: ['p', 'q'], + key_answer: 'q', text_answer: 'Dax', // the label for the sequence is 'Dax' correct_text: 'Correct! This was a %ANS%.', incorrect_text: 'Incorrect. This was a %ANS%.' diff --git a/docs/plugins/jspsych-categorize-html.md b/docs/plugins/jspsych-categorize-html.md index b6499cab9b..6453b7cc18 100644 --- a/docs/plugins/jspsych-categorize-html.md +++ b/docs/plugins/jspsych-categorize-html.md @@ -4,35 +4,35 @@ The categorize html plugin shows an HTML object on the screen. The subject respo ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | html string | *undefined* | The HTML stimulus to display. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. 
Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. -key_answer | numeric | *undefined* | The [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) indicating the correct response. -text_answer | string | "" | A label that is associated with the correct answer. Used in conjunction with the `correct_text` and `incorrect_text` parameters. -correct_text | string | "Correct." | String to show when the correct answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the `%ANS%` string (see example below). -incorrect_text | string | "Wrong." | String to show when the wrong answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the `%ANS%` string (see example below). -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). -force_correct_button_press | boolean | false | If set to true, then the subject must press the correct response key after feedback is given in order to advance to the next trial. -show_stim_with_feedback | boolean | true | If set to true, then the stimulus will be shown during feedback. If false, then only the text feedback will display during feedback. -show_feedback_on_timeout | boolean | false | If true, then category feedback will be displayed for an incorrect response after a timeout (timing_response is exceeded). If false, then a timeout message will be shown. -timeout_message | string | "Please respond faster." | The message to show on a timeout non-response. -stimulus_duration | numeric | null | How long to show the stimulus for (milliseconds). 
If null, then the stimulus is shown until a response is given. -feedback_duration | numeric | 2000 | How long to show the feedback for (milliseconds). -trial_duration | numeric | null | The maximum time allowed for a response. If null, then the experiment will wait indefinitely for a response. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| -------------------------- | ---------------- | ------------------------ | ---------------------------------------- | +| stimulus | html string | *undefined* | The HTML stimulus to display. | +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. | +| key_answer | string | *undefined* | The key character indicating the correct response. | +| text_answer | string | "" | A label that is associated with the correct answer. Used in conjunction with the `correct_text` and `incorrect_text` parameters. | +| correct_text | string | "Correct." | String to show when the correct answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. 
If present, the plugin will put the `text_answer` for the trial in place of the `%ANS%` string (see example below). | +| incorrect_text | string | "Wrong." | String to show when the wrong answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the `%ANS%` string (see example below). | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| force_correct_button_press | boolean | false | If set to true, then the subject must press the correct response key after feedback is given in order to advance to the next trial. | +| show_stim_with_feedback | boolean | true | If set to true, then the stimulus will be shown during feedback. If false, then only the text feedback will display during feedback. | +| show_feedback_on_timeout | boolean | false | If true, then category feedback will be displayed for an incorrect response after a timeout (trial_duration is exceeded). If false, then a timeout message will be shown. | +| timeout_message | string | "Please respond faster." | The message to show on a timeout non-response. | +| stimulus_duration | numeric | null | How long to show the stimulus for (milliseconds). If null, then the stimulus is shown until a response is given. | +| feedback_duration | numeric | 2000 | How long to show the feedback for (milliseconds). | +| trial_duration | numeric | null | The maximum time allowed for a response. If null, then the experiment will wait indefinitely for a response. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. 
+In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | string | Either the path to the image file or the string containing the HTML formatted content that the subject saw on this trial. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -correct | boolean | `true` if the subject got the correct answer, `false` otherwise. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| stimulus | string | Either the path to the image file or the string containing the HTML formatted content that the subject saw on this trial. | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| correct | boolean | `true` if the subject got the correct answer, `false` otherwise. | ## Examples @@ -42,11 +42,12 @@ correct | boolean | `true` if the subject got the correct answer, `false` otherw var categorization_trial = { type: 'categorize', stimulus: '
<div style="font-size:60px;">B</div>
', - key_answer: 80, + key_answer: 'p', text_answer: 'letter', - choices: [80, 81], + choices: ['p', 'q'], correct_text: "
<p class='prompt'>Correct, this is a %ANS%.</p>
", incorrect_text: "
<p class='prompt'>Incorrect, this is a %ANS%.</p>
", - prompt: "
<p class='prompt'>Press P for letter. Press Q for number.</p>
" + prompt: "
<p class='prompt'>Press p for letter. Press q for number.</p>
" }; ``` + diff --git a/docs/plugins/jspsych-categorize-image.md b/docs/plugins/jspsych-categorize-image.md index cb0ae9597c..a63e5ad4dc 100644 --- a/docs/plugins/jspsych-categorize-image.md +++ b/docs/plugins/jspsych-categorize-image.md @@ -4,36 +4,36 @@ The categorize image plugin shows an image object on the screen. The subject res ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | string | *undefined* | The path to the image file. -key_answer | numeric | *undefined* | The [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) indicating the correct response. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. -text_answer | string | "" | A label that is associated with the correct answer. Used in conjunction with the `correct_text` and `incorrect_text` parameters. -correct_text | string | "Correct." | String to show when the correct answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the %ANS% string (see example below). -incorrect_text | string | "Wrong." | String to show when the wrong answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. 
If present, the plugin will put the `text_answer` for the trial in place of the %ANS% string (see example below). -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). -force_correct_button_press | boolean | false | If set to true, then the subject must press the correct response key after feedback is given in order to advance to the next trial. -show_stim_with_feedback | boolean | true | If set to true, then the stimulus will be shown during feedback. If false, then only the text feedback will display during feedback. -show_feedback_on_timeout | boolean | false | If true, then category feedback will be displayed for an incorrect response after a timeout (timing_response is exceeded). If false, then a timeout message will be shown. -timeout_message | string | "Please respond faster." | The message to show on a timeout non-response. -stimulus_duration | numeric | null | How long to show the stimulus for (milliseconds). If null, then the stimulus is shown until a response is given. -feedback_duration | numeric | 2000 | How long to show the feedback for (milliseconds). -trial_duration | numeric | null | The maximum time allowed for a response. If null, then the experiment will wait indefinitely for a response. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| -------------------------- | ---------------- | ------------------------ | ---------------------------------------- | +| stimulus | string | *undefined* | The path to the image file. 
| +| key_answer | string | *undefined* | The key character indicating the correct response. | +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. | +| text_answer | string | "" | A label that is associated with the correct answer. Used in conjunction with the `correct_text` and `incorrect_text` parameters. | +| correct_text | string | "Correct." | String to show when the correct answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the %ANS% string (see example below). | +| incorrect_text | string | "Wrong." | String to show when the wrong answer is given. Can contain HTML formatting. The special string `%ANS%` can be used within the string. If present, the plugin will put the `text_answer` for the trial in place of the %ANS% string (see example below). | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| force_correct_button_press | boolean | false | If set to true, then the subject must press the correct response key after feedback is given in order to advance to the next trial. 
| +| show_stim_with_feedback | boolean | true | If set to true, then the stimulus will be shown during feedback. If false, then only the text feedback will display during feedback. | +| show_feedback_on_timeout | boolean | false | If true, then category feedback will be displayed for an incorrect response after a timeout (trial_duration is exceeded). If false, then a timeout message will be shown. | +| timeout_message | string | "Please respond faster." | The message to show on a timeout non-response. | +| stimulus_duration | numeric | null | How long to show the stimulus for (milliseconds). If null, then the stimulus is shown until a response is given. | +| feedback_duration | numeric | 2000 | How long to show the feedback for (milliseconds). | +| trial_duration | numeric | null | The maximum time allowed for a response. If null, then the experiment will wait indefinitely for a response. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | string | Either the path to the image file or the string containing the HTML formatted content that the subject saw on this trial. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -correct | boolean | `true` if the subject got the correct answer, `false` otherwise. 
+| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| stimulus | string | Either the path to the image file or the string containing the HTML formatted content that the subject saw on this trial. | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| correct | boolean | `true` if the subject got the correct answer, `false` otherwise. | ## Examples @@ -43,9 +43,9 @@ correct | boolean | `true` if the subject got the correct answer, `false` otherw var categorization_trial = { type: 'categorize-image', stimulus: 'img/harrypotter.png', - key_answer: 71, + key_answer: 'g', text_answer: 'Gryffindor', - choices: [71, 72, 82, 83], + choices: ['g', 'h', 'r', 's'], correct_text: "
<p class='prompt'>Correct! This person is a %ANS%.</p>
", incorrect_text: "
<p class='prompt'>Incorrect. This person is a %ANS%.</p>
", prompt: "
<p class='prompt'>Is this person a (G)ryffindor, (H)ufflepuff, (R)avenclaw, or (S)lytherin?</p>
" diff --git a/docs/plugins/jspsych-cloze.md index 8106dfd5ca..3e78acb601 100644 --- a/docs/plugins/jspsych-cloze.md +++ b/docs/plugins/jspsych-cloze.md @@ -1,23 +1,25 @@ # jspsych-cloze This plugin displays a text with certain words removed. Participants are asked to replace the missing items. Responses are recorded when clicking a button. Optionally, responses are evaluated and a function is called in case of differences, making it possible to inform participants about mistakes. ## Parameters -Parameter | Type | Default Value | Description ----------|------|---------------|------------ -text | string | undefined | The cloze text to be displayed. Blanks are indicated by %% signs and automatically replaced by input fields. If there is a correct answer you want the system to check against, it must be typed between the two percentage signs (i.e. % correct solution %). -button_text | string | OK | Text of the button participants have to press for finishing the cloze test. -check_answers | boolean | false | Boolean value indicating if the answers given by participants should be compared against a correct solution given in the text (between % signs) after the button was clicked. If ```true```, answers are checked and in case of differences, the ```mistake_fn``` is called. In this case, the trial does not automatically finish. If ```false```, no checks are performed and the trial automatically ends when clicking the button. -mistake_fn | function | ```function(){}``` | Function called if ```check_answers``` is set to ```true``` and there is a difference between the participants answers and the correct solution provided in the text. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified.
Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ------------- | -------- | ------------------ | ---------------------------------------- | +| text | string | *undefined* | The cloze text to be displayed. Blanks are indicated by %% signs and automatically replaced by input fields. If there is a correct answer you want the system to check against, it must be typed between the two percentage signs (i.e. % correct solution %). | +| button_text | string | OK | Text of the button participants have to press for finishing the cloze test. | +| check_answers | boolean | false | Boolean value indicating if the answers given by participants should be compared against a correct solution given in the text (between % signs) after the button was clicked. If ```true```, answers are checked and in case of differences, the ```mistake_fn``` is called. In this case, the trial does not automatically finish. If ```false```, no checks are performed and the trial automatically ends when clicking the button. | +| mistake_fn | function | ```function(){}``` | Function called if ```check_answers``` is set to ```true``` and there is a difference between the participant's answers and the correct solution provided in the text. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial.
-Name | Type | Value ------|------|------ -answers | array of strings | Answers the partcipant gave +| Name | Type | Value | +| -------- | ---------------- | --------------------------- | +| response | array of strings | Answers the participant gave | ## Examples diff --git a/docs/plugins/jspsych-external-html.md index bc3a421358..ca00aea0aa 100644 --- a/docs/plugins/jspsych-external-html.md +++ b/docs/plugins/jspsych-external-html.md @@ -4,25 +4,25 @@ The HTML plugin displays an external HTML document (often a consent form). Eithe ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. -Parameter | Type | Default Value | Description ----------|------|---------------|------------ -url | string | *undefined* | The URL of the page to display. -cont_key | numeric | null | The key code a key to advance to the next trial. If left as null, then the subject will not be able to advance trials using the keyboard. -cont_btn | string | null | The ID of a clickable element on the page. When the element is clicked, the trial will advance. -check_fn | function | `function(){ return true; }` | This function is called with the jsPsych `display_element` as the only argument when the subject attempts to advance the trial. The trial will only advance if the function return `true`. This can be used to verify that the subject has correctly filled out a form before continuing, for example. -force_refresh | boolean | false | If `true`, then the plugin will avoid using the cached version of the HTML page to load if one exists.
-execute_script | boolean | false | If `true`, then scripts on the remote page will be executed. +| Parameter | Type | Default Value | Description | +| -------------- | -------- | ---------------------------- | ---------------------------------------- | +| url | string | *undefined* | The URL of the page to display. | +| cont_key | string | null | The key character the subject can use to advance to the next trial. If left as null, then the subject will not be able to advance trials using the keyboard. | +| cont_btn | string | null | The ID of a clickable element on the page. When the element is clicked, the trial will advance. | +| check_fn | function | `function(){ return true; }` | This function is called with the jsPsych `display_element` as the only argument when the subject attempts to advance the trial. The trial will only advance if the function returns `true`. This can be used to verify that the subject has correctly filled out a form before continuing, for example. | +| force_refresh | boolean | false | If `true`, then the plugin will avoid using the cached version of the HTML page to load if one exists. | +| execute_script | boolean | false | If `true`, then scripts on the remote page will be executed. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -url | string | The URL of the page. -rt | numeric | The response time in milliseconds for the subject to finish the trial. +| Name | Type | Value | +| ---- | ------- | ---------------------------------------- | +| url | string | The URL of the page. | +| rt | numeric | The response time in milliseconds for the subject to finish the trial.
| ## Examples diff --git a/docs/plugins/jspsych-free-sort.md b/docs/plugins/jspsych-free-sort.md index 2064915437..2902863c30 100644 --- a/docs/plugins/jspsych-free-sort.md +++ b/docs/plugins/jspsych-free-sort.md @@ -1,37 +1,42 @@ # jspsych-free-sort plugin -The free-sort plugin displays a collection of images on the screen that the subject can interact with by clicking and dragging. All of the moves that the subject performs are recorded. - -## Dependency - -This plugin requires the jQuery UI library, available at [https://jqueryui.com/](https://jqueryui.com/). You must include the library in the `` section of your experiment page. You can use the [Google-hosted version of the library](https://developers.google.com/speed/libraries/#jquery-ui). +The free-sort plugin displays one or more images on the screen that the participant can interact with by clicking and dragging with a mouse, or touching and dragging with a touchscreen device. When the trial starts, the images can be positioned outside or inside the sort area. All images must be moved into the sorting area before the participant can click a button to end the trial. All of the moves that the participant performs are recorded, as well as the final positions of all images. This plugin could be useful when asking participants to position images based on similarity to one another, or to recall image spatial locations. ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. 
Parameter | Type | Default Value | Description ----------|------|---------------|------------ stimuli | array | *undefined* | Each element of this array is an image path. stim_height | numeric | 100 | The height of the images in pixels. stim_width | numeric | 100 | The width of the images in pixels. -sort_area_height | numeric | 800 | The height of the container that subjects can move the stimuli in. Stimuli will be constrained to this area. -sort_area_width | numeric | 800 | The width of the container that subjects can move the stimuli in. Stimuli will be constrained to this area. -prompt | string | null | This string can contain HTML markup. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). +scale_factor | numeric | 1.5 | How much larger to make the stimulus while moving (1 = no scaling). +sort_area_height | numeric | 800 | The height of the container that participants can move the stimuli in. Stimuli will be constrained to this area. +sort_area_width | numeric | 800 | The width of the container that participants can move the stimuli in. Stimuli will be constrained to this area. +sort_area_shape | string | "ellipse" | The shape of the sorting area, can be "ellipse" or "square". +prompt | string | null | This string can contain HTML markup. The intention is that it can be used to provide a reminder about the action the participant is supposed to take (e.g., which key to press). prompt_location | string | "above" | Indicates whether to show the prompt `"above"` or `"below"` the sorting area. -button_label | string | 'Continue' | The text that appears on the button to continue to the next trial. - +button_label | string | 'Continue' | The text that appears on the button to continue to the next trial. 
+change_border_background_color | boolean | true | If `true`, the sort area border color will change while items are being moved in and out of the sort area, and the background color will change once all items have been moved into the sort area. If `false`, the border will remain black and the background will remain white throughout the trial. +border_color_in | string | '#a1d99b' | If `change_border_background_color` is `true`, the sort area border will change to this color when an item is being moved into the sort area, and the background will change to this color when all of the items have been moved into the sort area. +border_color_out | string | '#fc9272' | If `change_border_background_color` is `true`, this will be the color of the sort area border when there are one or more items that still need to be moved into the sort area. +border_width | numeric | null | The width in pixels of the border around the sort area. If `null`, the border width will be 3% of the `sort_area_height`. +counter_text_unfinished | string | You still need to place %n% item%s% inside the sort area. | Text to display when there are one or more items that still need to be placed in the sort area. If "%n%" is included in the string, it will be replaced with the number of items that still need to be moved inside. If "%s%" is included in the string, a "s" will be included when the number of items remaining is greater than one. +counter_text_finished | string | All items placed. Feel free to reposition items if necessary. | Text that will take the place of the counter_text_unfinished text when all items have been moved inside the sort area. +stim_starts_inside | boolean | false | If `false`, the images will be positioned to the left and right of the sort area when the trial loads. If `true`, the images will be positioned at random locations inside the sort area when the trial loads. 
+column_spread_factor | numeric | 1 | When the images appear outside the sort area, this determines the x-axis spread of the image columns. Default value is 1. Values less than 1 will compress the image columns along the x-axis, and values greater than 1 will spread them farther apart. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ -init_locations | JSON string | A JSON-encoded object representing the initial locations of all the stimuli in the sorting area. The object is an array with one element per stimulus. Each element in the array has a "src", "x", and "y" value. "src" is the image path, and "x" and "y" are the object location. -moves | JSON string | A JSON-encoded object representing all of the moves the participant made when sorting. The object is an array with each element representing a move. Each element in the array has a "src", "x", and "y" value. "src" is the image path, and "x" and "y" are the object location after the move. -final_locations | JSON string | A JSON-encoded object representing the final locations of all the stimuli in the sorting area. The object is an array with one element per stimulus. Each element in the array has a "src", "x", and "y" value. "src" is the image path, and "x" and "y" are the object location. -rt | numeric | The response time in milliseconds for the subject to finish all sorting. +init_locations | array | An array containing objects representing the initial locations of all the stimuli in the sorting area. Each element in the array represents a stimulus, and has a "src", "x", and "y" value. "src" is the image path, and "x" and "y" are the object location. 
This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. +moves | array | An array containing objects representing all of the moves the participant made when sorting. Each object represents a move. Each element in the array has a "src", "x", and "y" value. "src" is the image path, and "x" and "y" are the object location after the move. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. +final_locations | array | An array containing objects representing the final locations of all the stimuli in the sorting area. Each element in the array represents a stimulus, and has a "src", "x", and "y" value. "src" is the image path, and "x" and "y" are the object location. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. +rt | numeric | The response time in milliseconds for the participant to finish all sorting. ## Examples diff --git a/docs/plugins/jspsych-fullscreen.md b/docs/plugins/jspsych-fullscreen.md index f5b528881c..9f668566e4 100644 --- a/docs/plugins/jspsych-fullscreen.md +++ b/docs/plugins/jspsych-fullscreen.md @@ -6,7 +6,7 @@ Safari does not support keyboard input when the browser is in fullscreen mode. T ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. 
Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -17,7 +17,7 @@ delay_after | numeric | 1000 | The length of time to delay after entering fullsc ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ diff --git a/docs/plugins/jspsych-html-button-response.md b/docs/plugins/jspsych-html-button-response.md index cd81f1b07f..951b948aae 100644 --- a/docs/plugins/jspsych-html-button-response.md +++ b/docs/plugins/jspsych-html-button-response.md @@ -4,8 +4,7 @@ This plugin displays HTML content and records responses generated by button clic ## Parameters -Parameters with a default value of *undefined* must be specified. -Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -17,16 +16,16 @@ trial_duration | numeric | null | How long to wait for the subject to make a res stimulus_duration | numeric | null | How long to display the stimulus in milliseconds. The visibility CSS property of the stimulus will be set to `hidden` after this time has elapsed. If this is null, then the stimulus will remain visible until the trial ends. margin_vertical | string | '0px' | Vertical margin of the button(s). margin_horizontal | string | '8px' | Horizontal margin of the button(s). 
-response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `timing_response` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -button_pressed | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. +response | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. stimulus | string | The HTML content that was displayed on the screen. 
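The renamed `response` field records a 0-based index into the `choices` array rather than a key code. A minimal sketch of how that index maps back to a button label (the trial definition and the `responseLabel` helper below are illustrative, not part of the plugin):

```javascript
// Sketch of an html-button-response trial; `response` in the recorded data
// is the 0-based index of the button that was clicked.
var trial = {
  type: 'html-button-response',
  stimulus: '<p>Which direction?</p>',
  choices: ['Left', 'Right'],   // a response of 0 means 'Left', 1 means 'Right'
  trial_duration: 5000,         // if no response within 5 s, response is null
  response_ends_trial: true
};

// Hypothetical analysis helper: map the recorded index back to its label.
function responseLabel(trial, data) {
  return data.response === null ? null : trial.choices[data.response];
}
```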
## Examples diff --git a/docs/plugins/jspsych-html-keyboard-response.md b/docs/plugins/jspsych-html-keyboard-response.md index 495f611558..201d0a8753 100644 --- a/docs/plugins/jspsych-html-keyboard-response.md +++ b/docs/plugins/jspsych-html-keyboard-response.md @@ -5,26 +5,26 @@ This plugin displays HTML content and records responses generated with the keybo ## Parameters -Parameters with a default value of undefined must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of undefined must be specified. Other parameters can be left unspecified if the default value is acceptable. -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | HTML string | *undefined* | The string to be displayed. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). -stimulus_duration | numeric | null | How long to display the stimulus in milliseconds. The visibility CSS property of the stimulus will be set to `hidden` after this time has elapsed. If this is null, then the stimulus will remain visible until the trial ends. 
-trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +| Parameter | Type | Default Value | Description | +| ------------------- | ---------------- | ------------------ | ---------------------------------------- | +| stimulus | HTML string | *undefined* | The string to be displayed. | +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. 
The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| stimulus_duration | numeric | null | How long to display the stimulus in milliseconds. The visibility CSS property of the stimulus will be set to `hidden` after this time has elapsed. If this is null, then the stimulus will remain visible until the trial ends. | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. 
The time is measured from when the stimulus first appears on the screen until the subject's response. -stimulus | string | The HTML content that was displayed on the screen. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| stimulus | string | The HTML content that was displayed on the screen. | ## Examples diff --git a/docs/plugins/jspsych-html-slider-response.md b/docs/plugins/jspsych-html-slider-response.md index 91afc4e2a0..956ba87d7a 100644 --- a/docs/plugins/jspsych-html-slider-response.md +++ b/docs/plugins/jspsych-html-slider-response.md @@ -4,7 +4,7 @@ This plugin displays HTML content and allows the subject to respond by dragging ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -13,24 +13,25 @@ labels | array of strings | [] | Labels displayed at equidistant locations on th button_label | string | 'Continue' | Label of the button to end the trial. min | integer | 0 | Sets the minimum value of the slider. max | integer | 100 | Sets the maximum value of the slider. -start | integer | 50 | Sets the starting value of the slider +slider_start | integer | 50 | Sets the starting value of the slider step | integer | 1 | Sets the step of the slider. 
This is the smallest amount by which the slider can change. slider_width | integer | null | Set the width of the slider in pixels. If left null, then the width will be equal to the widest element in the display. require_movement | boolean | false | If true, the subject must move the slider before clicking the continue button. prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). stimulus_duration | numeric | null | How long to display the stimulus in milliseconds. The visibility CSS property of the stimulus will be set to `hidden` after this time has elapsed. If this is null, then the stimulus will remain visible until the trial ends. trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. 
You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ response | numeric | The numeric value of the slider. rt | numeric | The time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. stimulus | string | The HTML content that was displayed on the screen. +slider_start | numeric | The starting value of the slider. ## Examples diff --git a/docs/plugins/jspsych-iat-html.md b/docs/plugins/jspsych-iat-html.md index f4345fc1aa..bd69950cb0 100644 --- a/docs/plugins/jspsych-iat-html.md +++ b/docs/plugins/jspsych-iat-html.md @@ -4,34 +4,34 @@ This plugin runs a single trial of the [implicit association test (IAT)](https:/ ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | string | *undefined* | The HTML-format stimulus to display. -html_when_wrong | string | `X` | The content to display when a user presses the wrong key. -bottom_instructions | string | `
<p>If you press the wrong key, a red X will appear. Press any key to continue.</p>` | Instructions about making a wrong key press and whether another key press is needed to continue. -force_correct_key_press | boolean | false | If this is true and the user presses the wrong key then they have to press the other key to continue. An example would be two keys 'E' and 'I'. If the key associated with the stimulus is 'E' and key 'I' was pressed, then pressing 'E' is needed to continue the trial. When this is true, then parameter key_to_move_forward is not used. -display_feedback | boolean | false | If true, then `html_when_wrong` and `wrong_image_name` is required. If false, timing_response is needed and trial will continue automatically. -left_category_key | string | 'E' | Key press that is associated with the left_category_label. -right_category_key | string | 'I' | Key press that is associated with the right_category_label. -left_category_label | string | ['left'] | An array that contains the words/labels associated with a certain stimulus. The labels are aligned to the left side of the page. -right_category_label | string | ['right'] | An array that contains the words/labels associated with a certain stimulus. The labels are aligned to the right side of the page. -stim_key_association | string | 'undefined' | Inputs are either 'left' or 'right'. It will associate the stimulus with the key presses on the left or right side of the page(left_category_key or right_category_key). -key_to_move_forward | array of characters | [jsPsych.ALL_KEYS] | This array contains the characters the subject is allowed to press to move on to the next trial if their key press was incorrect and feedback was displayed. Can also have 'other key' as an option which will only allow the user to select the right key to move forward. -timing_response | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds.
If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `timing_response` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +| Parameter | Type | Default Value | Description | +| ----------------------- | ---------------- | ---------------------------------------- | ---------------------------------------- | +| stimulus | string | *undefined* | The HTML-formatted stimulus to display. | +| html_when_wrong | string | `X` | The content to display when a user presses the wrong key. | +| bottom_instructions | string | `
<p>If you press the wrong key, a red X will appear. Press any key to continue.</p>` | Instructions about making a wrong key press and whether another key press is needed to continue. | +| force_correct_key_press | boolean | false | If this is `true` and the user presses the wrong key then they have to press the other key to continue. An example would be two keys 'e' and 'i'. If the key associated with the stimulus is 'e' and key 'i' was pressed, then pressing 'e' is needed to continue the trial. When this is `true`, then parameter `key_to_move_forward` is not used. | +| display_feedback | boolean | false | If `true`, then `html_when_wrong` and `wrong_image_name` are required. If `false`, `trial_duration` is needed and trial will continue automatically. | +| left_category_key | string | 'e' | Key press that is associated with the `left_category_label`. | +| right_category_key | string | 'i' | Key press that is associated with the `right_category_label`. | +| left_category_label | string | ['left'] | An array that contains the words/labels associated with a certain stimulus. The labels are aligned to the left side of the page. | +| right_category_label | string | ['right'] | An array that contains the words/labels associated with a certain stimulus. The labels are aligned to the right side of the page. | +| stim_key_association | string | *undefined* | Either 'left' or 'right'. This indicates whether the stimulus is associated with the key press and category on the left or right side of the page (`left_category_key` or `right_category_key`). | +| key_to_move_forward | array of strings | jsPsych.ALL_KEYS | This array contains the characters the subject is allowed to press to move on to the next trial if their key press was incorrect and feedback was displayed. Can also have 'other key' as an option which will only allow the user to select the right key to move forward. | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds.
If the subject fails to make a response before this timer is reached, the subject's response will be recorded as `null` for the trial and the trial will end. If the value of this parameter is `null`, then the trial will wait for a response indefinitely. | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | string | Either the path to the image file or the string containing the HTML formatted content that the subject saw on this trial. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -correct | boolean | boolean of whether the user's key press was correct for the given image or incorrect. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| stimulus | string | Either the path to the image file or the string containing the HTML-formatted content that the subject saw on this trial. 
| +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| correct | boolean | Boolean indicating whether the user's key press was correct or incorrect for the given stimulus. | ## Examples @@ -44,9 +44,9 @@ var trial_block = { bottom_instructions: '
<p>If you press the wrong key, a red X will appear. Press the other key to continue</p>', force_correct_key_press: true, display_feedback: true, - timing_response: 3000, //Only if display_feedback is false - left_category_key: 'E', - right_category_key: 'I', + trial_duration: 3000, //Only if display_feedback is false + left_category_key: 'e', + right_category_key: 'i', left_category_label: ['OLD'], right_category_label: ['YOUNG'], response_ends_trial: true diff --git a/docs/plugins/jspsych-iat-image.md b/docs/plugins/jspsych-iat-image.md index 33ae5f4bc5..943b503b7e 100644 --- a/docs/plugins/jspsych-iat-image.md +++ b/docs/plugins/jspsych-iat-image.md @@ -1,37 +1,37 @@ -# jspsych-iat-image plugin +# jspsych-iat-image plugin This plugin runs a single trial of the [implicit association test (IAT)](https://implicit.harvard.edu/implicit/iatdetails.html), using an image as the stimulus. ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | string | *undefined* | The stimulus to display. The path to an image. -htm_when_wrong | string | `X` | The image to display when a user presses the wrong key. -bottom_instructions | string | `
<p>If you press the wrong key, a red X will appear. Press any key to continue.</p>` | Instructions about making a wrong key press and whether another key press is needed to continue. -force_correct_key_press | boolean | false | If this is true and the user presses the wrong key then they have to press the other key to continue. An example would be two keys 'E' and 'I'. If the key associated with the stimulus is 'E' and key 'I' was pressed, then pressing 'E' is needed to continue the trial. When this is true, then parameter key_to_move_forward is not used. -display_feedback | boolean | false | If true, then image_when_wrong and wrong_image_name is required. If false, timing_response is needed and trial will continue automatically. -left_category_key | string | 'E' | Key press that is associated with the left_category_label. -right_category_key | string | 'I' | Key press that is associated with the right_category_label. -left_category_label | string | ['left'] | An array that contains the words/labels associated with a certain stimulus. The labels are aligned to the left side of the page. -right_category_label | string | ['right'] | An array that contains the words/labels associated with a certain stimulus. The labels are aligned to the right side of the page. -stim_key_association | string | 'undefined' | Inputs are either 'left' or 'right'. It will associate the stimulus with the key presses on the left or right side of the page(left_category_key or right_category_key). -key_to_move_forward | array of characters | [jsPsych.ALL_KEYS] | This array contains the characters the subject is allowed to press to move on to the next trial if their key press was incorrect and feedback was displayed. Can also have 'other key' as an option which will only allow the user to select the right key to move forward. -timing_response | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds.
If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `timing_response` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +| Parameter | Type | Default Value | Description | +| ----------------------- | ------------------- | ---------------------------------------- | ---------------------------------------- | +| stimulus | string | *undefined* | The stimulus to display. The path to an image. | +| html_when_wrong | string | `X` | The HTML to display when a user presses the wrong key. | +| bottom_instructions | string | `
<p>If you press the wrong key, a red X will appear. Press any key to continue.</p>` | Instructions about making a wrong key press and whether another key press is needed to continue. | +| force_correct_key_press | boolean | false | If this is `true` and the user presses the wrong key then they have to press the other key to continue. An example would be two keys 'e' and 'i'. If the key associated with the stimulus is 'e' and key 'i' was pressed, then pressing 'e' is needed to continue the trial. When this is `true`, then parameter `key_to_move_forward` is not used. | +| display_feedback | boolean | false | If `true`, then `image_when_wrong` and `wrong_image_name` are required. If `false`, `trial_duration` is needed and trial will continue automatically. | +| left_category_key | string | 'e' | Key press that is associated with the `left_category_label`. | +| right_category_key | string | 'i' | Key press that is associated with the `right_category_label`. | +| left_category_label | string | ['left'] | An array that contains the words/labels associated with a certain stimulus. The labels are aligned to the left side of the page. | +| right_category_label | string | ['right'] | An array that contains the words/labels associated with a certain stimulus. The labels are aligned to the right side of the page. | +| stim_key_association | string | *undefined* | Either 'left' or 'right'. This indicates whether the stimulus is associated with the key press and category on the left or right side of the page (`left_category_key` or `right_category_key`). | +| key_to_move_forward | array of characters | jsPsych.ALL_KEYS | This array contains the characters the subject is allowed to press to move on to the next trial if their key press was incorrect and feedback was displayed. Can also have 'other key' as an option which will only allow the user to select the right key to move forward. | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds.
If the subject fails to make a response before this timer is reached, the subject's response will be recorded as `null` for the trial and the trial will end. If the value of this parameter is `null`, then the trial will wait for a response indefinitely. | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | string | Either the path to the image file or the string containing the HTML formatted content that the subject saw on this trial. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -correct | boolean | boolean of whether the user's key press was correct for the given image or incorrect. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| stimulus | string | Either the path to the image file or the string containing the HTML-formatted content that the subject saw on this trial. 
| +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| correct | boolean | Boolean indicating whether the user's key press was correct or incorrect for the given image. | ## Examples @@ -44,9 +44,9 @@ var trial_block = { bottom_instructions: '
<p>If you press the wrong key, a red X will appear. Press the other key to continue</p>', force_correct_key_press: true, display_feedback: true, - timing_response: 3000, //Only if display_feedback is false - left_category_key: 'E', - right_category_key: 'I', + trial_duration: 3000, //Only if display_feedback is false + left_category_key: 'e', + right_category_key: 'i', left_category_label: ['OLD'], right_category_label: ['YOUNG'], response_ends_trial: true diff --git a/docs/plugins/jspsych-image-button-response.md b/docs/plugins/jspsych-image-button-response.md index c699ab38d7..b5ce2c5af6 100644 --- a/docs/plugins/jspsych-image-button-response.md +++ b/docs/plugins/jspsych-image-button-response.md @@ -2,17 +2,18 @@ This plugin displays an image and records responses generated with a button click. The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically if the subject has failed to respond within a fixed length of time. The button itself can be customized using HTML formatting. +Image files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the image stimulus, you will need to [manually preload](/overview/media-preloading/#manual-preloading) the images. + ## Parameters -Parameters with a default value of *undefined* must be specified. -Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ stimulus | string | undefined | The path of the image file to be displayed. stimulus_height | integer | null | Set the height of the image in pixels.
If left null (no value specified), then the image will display at its natural height. stimulus_width | integer | null | Set the width of the image in pixels. If left null (no value specified), then the image will display at its natural width. -maintain_aspect_ration | boolean | true | If setting *only* the width or *only* the height and this parameter is true, then the other dimension will be scaled to maintain the image's aspect ratio. +maintain_aspect_ratio | boolean | true | If setting *only* the width or *only* the height and this parameter is true, then the other dimension will be scaled to maintain the image's aspect ratio. choices | array of strings | [] | Labels for the buttons. Each different string in the array will generate a different button. button_html | HTML string | `''` | A template of HTML for generating the button elements. You can override this to create customized buttons of various kinds. The string `%choice%` will be changed to the corresponding element of the `choices` array. You may also specify an array of strings, if you need different HTML to render for each button. If you do specify an array, the `choices` array and this array must have the same length. The HTML from position 0 in the `button_html` array will be used to create the button for element 0 in the `choices` array, and so on. prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). @@ -20,16 +21,17 @@ stimulus_duration | numeric | null | How long to show the stimulus for in millis trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. 
If the value of this parameter is null, the trial will wait for a response indefinitely. margin_vertical | string | '0px' | Vertical margin of the button(s). margin_horizontal | string | '8px' | Horizontal margin of the button(s). -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `timing_response` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +render_on_canvas | boolean | true | If true, the image will be drawn onto a canvas element. This prevents a blank screen (white flash) between consecutive image trials in some browsers, like Firefox and Edge. If false, the image will be shown via an img element, as in previous versions of jsPsych. If the stimulus is an **animated gif**, you must set this parameter to false, because the canvas rendering method will only present static images. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. 
Name | Type | Value -----|------|------ rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -button_pressed | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. +response | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. stimulus | string | The path of the image that was displayed. ## Examples diff --git a/docs/plugins/jspsych-image-keyboard-response.md b/docs/plugins/jspsych-image-keyboard-response.md index 2974f81b62..1fcd20b3b2 100644 --- a/docs/plugins/jspsych-image-keyboard-response.md +++ b/docs/plugins/jspsych-image-keyboard-response.md @@ -2,32 +2,34 @@ This plugin displays an image and records responses generated with the keyboard. The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically if the subject has failed to respond within a fixed length of time. +Image files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the image stimulus, you will need to [manually preload](/overview/media-preloading/#manual-preloading) the images. ## Parameters -Parameters with a default value of undefined must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimulus | string | *undefined* | The path of the image file to be displayed. -stimulus_height | integer | null | Set the height of the image in pixels. If left null (no value specified), then the image will display at its natural height.
-stimulus_width | integer | null | Set the width of the image in pixels. If left null (no value specified), then the image will display at its natural width. -maintain_aspect_ration | boolean | true | If setting *only* the width or *only* the height and this parameter is true, then the other dimension will be scaled to maintain the image's aspect ratio. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). -stimulus_duration | numeric | null | How long to show the stimulus for in milliseconds. If the value is null, then the stimulus will be shown until the subject makes a response. -trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. 
You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of undefined must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| --------------------- | ---------------- | ------------------ | ---------------------------------------- | +| stimulus | string | *undefined* | The path of the image file to be displayed. | +| stimulus_height | integer | null | Set the height of the image in pixels. If left null (no value specified), then the image will display at its natural height. | +| stimulus_width | integer | null | Set the width of the image in pixels. If left null (no value specified), then the image will display at its natural width. | +| maintain_aspect_ratio | boolean | true | If setting *only* the width or *only* the height and this parameter is true, then the other dimension will be scaled to maintain the image's aspect ratio. | +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. 
| +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| stimulus_duration | numeric | null | How long to show the stimulus for in milliseconds. If the value is `null`, then the stimulus will be shown until the subject makes a response. | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is `null`, then the trial will wait for a response indefinitely. | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. | +| render_on_canvas | boolean | true | If `true`, the image will be drawn onto a canvas element. This prevents a blank screen (white flash) between consecutive image trials in some browsers, like Firefox and Edge. If `false`, the image will be shown via an img element, as in previous versions of jsPsych. If the stimulus is an **animated gif**, you must set this parameter to false, because the canvas rendering method will only present static images. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. 
+In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -stimulus | string | The path of the image that was displayed. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| stimulus | string | The path of the image that was displayed. | ## Examples diff --git a/docs/plugins/jspsych-image-slider-response.md b/docs/plugins/jspsych-image-slider-response.md index 108a890268..b9f5697621 100644 --- a/docs/plugins/jspsych-image-slider-response.md +++ b/docs/plugins/jspsych-image-slider-response.md @@ -2,9 +2,11 @@ This plugin displays an image and allows the subject to respond by dragging a slider. +Image files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the image stimulus, you will need to [manually preload](/overview/media-preloading/#manual-preloading) the images. + ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable.
+In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -16,24 +18,26 @@ labels | array of strings | [] | Labels displayed at equidistant locations on th button_label | string | 'Continue' | Label of the button to advance/submit min | integer | 0 | Sets the minimum value of the slider max | integer | 100 | Sets the maximum value of the slider -start | integer | 50 | Sets the starting value of the slider +slider_start | integer | 50 | Sets the starting value of the slider step | integer | 1 | Sets the step of the slider slider_width | integer | null | Set the width of the slider in pixels. If left null, then the width will be equal to the widest element in the display. require_movement | boolean | false | If true, the subject must move the slider before clicking the continue button. prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). stimulus_duration | numeric | null | How long to show the stimulus for in milliseconds. If the value is null, then the stimulus will be shown until the subject makes a response. trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. 
-response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +render_on_canvas | boolean | true | If true, the image will be drawn onto a canvas element. This prevents a blank screen (white flash) between consecutive image trials in some browsers, like Firefox and Edge. If false, the image will be shown via an img element, as in previous versions of jsPsych. If the stimulus is an **animated gif**, you must set this parameter to false, because the canvas rendering method will only present static images. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ response | numeric | The numeric value of the slider. rt | numeric | The time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. 
stimulus | string | The path of the image that was displayed. +slider_start | numeric | The starting value of the slider. ## Examples diff --git a/docs/plugins/jspsych-instructions.md b/docs/plugins/jspsych-instructions.md index ad6db13ce6..efb063fcad 100644 --- a/docs/plugins/jspsych-instructions.md +++ b/docs/plugins/jspsych-instructions.md @@ -2,30 +2,31 @@ This plugin is for showing instructions to the subject. It allows subjects to navigate through multiple pages of instructions at their own pace, recording how long the subject spends on each page. Navigation can be done using the mouse or keyboard. Subjects can be allowed to navigate forwards and backwards through pages, if desired. -## Parameters - -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -pages | array | *undefined* | Each element of the array is the content for a single page. Each page should be an HTML-formatted string. -key_forward | key code | 'rightarrow' | This is the key that the subject can press in order to advance to the next page. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). -key_backward | key code | 'leftarrow' | This is the key that the subject can press to return to the previous page. -allow_backward | boolean | true | If true, the subject can return to previous pages of the instructions. If false, they may only advace to the next page. -allow_keys | boolean | true | If true, the subject can use keyboard keys to navigate the pages. If false, they may not. -show_clickable_nav | boolean | false | If true, then a `Previous` and `Next` button will be displayed beneath the instructions. Subjects can click the buttons to navigate. 
-button_label_previous | string | 'Previous' | The text that appears on the button to go backwards. -button_label_next | string | 'Next' | The text that appears on the button to go forwards. - +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| --------------------- | ------- | ------------- | ---------------------------------------- | +| pages | array | *undefined* | Each element of the array is the content for a single page. Each page should be an HTML-formatted string. | +| key_forward | string | 'ArrowRight' | This is the key that the subject can press in order to advance to the next page. This key should be specified as a string (e.g., `'a'`, `'ArrowLeft'`, `' '`, `'Enter'`). | +| key_backward | string | 'ArrowLeft' | This is the key that the subject can press to return to the previous page. This key should be specified as a string (e.g., `'a'`, `'ArrowLeft'`, `' '`, `'Enter'`). | +| allow_backward | boolean | true | If true, the subject can return to previous pages of the instructions. If false, they may only advance to the next page. | +| allow_keys | boolean | true | If `true`, the subject can use keyboard keys to navigate the pages. If `false`, they may not. | +| show_clickable_nav | boolean | false | If true, then a `Previous` and `Next` button will be displayed beneath the instructions. Subjects can click the buttons to navigate. | +| button_label_previous | string | 'Previous' | The text that appears on the button to go backwards. | +| button_label_next | string | 'Next' | The text that appears on the button to go forwards.
| +| show_page_number | boolean | false | If true, and clickable navigation is enabled, then Page x/y will be shown between the nav buttons. | +| page_label | string | 'Page' | The text that appears before x/y pages displayed when show_page_number is true. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -view_history | JSON string | A JSON string containing the order of pages the subject viewed (including when the subject returned to previous pages) and the time spent viewing each page. -rt | numeric | The response time in milliseconds for the subject to view all of the pages. +| Name | Type | Value | +| ------------ | ----------- | ---------------------------------------- | +| view_history | array | An array containing the order of pages the subject viewed (including when the subject returned to previous pages) and the time spent viewing each page. Each object in the array represents a single page view, and contains keys called `page_index` (the page number, starting with 0) and `viewing_time` (duration of the page view). This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +| rt | numeric | The response time in milliseconds for the subject to view all of the pages. | ## Example diff --git a/docs/plugins/jspsych-maxdiff.md b/docs/plugins/jspsych-maxdiff.md new file mode 100644 index 0000000000..3792f89c0f --- /dev/null +++ b/docs/plugins/jspsych-maxdiff.md @@ -0,0 +1,41 @@ +# jspsych-maxdiff plugin + +The maxdiff plugin displays a table with rows of alternatives to be selected for two mutually-exclusive categories, typically as 'most' or 'least' on a particular criterion (e.g.
importance, preference, similarity). The participant responds by selecting one radio button corresponding to an alternative in both the left and right response columns. The same alternative cannot be endorsed on both the left and right response columns (e.g. 'most' and 'least') simultaneously. + +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +Parameter | Type | Default Value | Description +----------|------|---------------|------------ +alternatives | array | *undefined* | An array of one or more alternatives of string type to fill the rows of the maxdiff table. If `required` is true, then the array must contain two or more alternatives, so that at least one can be selected for both the left and right columns. +labels | array | *undefined* | An array with exactly two labels of string type to display as column headings (to the left and right of the alternatives) for responses on the criterion of interest. +randomize_alternative_order | boolean | `false` | If true, the display order of `alternatives` is randomly determined at the start of the trial. +preamble | string | empty string | HTML-formatted string to display at the top of the page above the maxdiff table. +required | boolean | `false` | If true, prevents the user from submitting the response and proceeding until a radio button in both the left and right response columns has been selected. +button_label | string | 'Continue' | Label of the button. + + +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial.
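Because the maxdiff plugin reports the chosen alternatives as an object with `left` and `right` keys (see the Data Generated section), they can be read directly in a trial-level `on_finish` callback before the data is serialized. A sketch, with placeholder alternatives:

```javascript
// Sketch: inspecting the maxdiff response object in an on_finish callback.
// The alternatives and labels here are placeholder values.
var maxdiff_trial = {
  type: 'maxdiff',
  alternatives: ['apple', 'orange', 'pear'],
  labels: ['Most Preferred', 'Least Preferred'],
  on_finish: function(data) {
    // data.response holds the chosen alternatives under `left` and `right` keys
    console.log('Most preferred: ' + data.response.left);
    console.log('Least preferred: ' + data.response.right);
  }
};
```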
+ +Name | Type | Value +-----|------|------ +rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the maxdiff table first appears on the screen until the subject's response. +labels | object | An object with two keys, `left` and `right`, containing the labels (strings) corresponding to the left and right response columns. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. +response | object | An object with two keys, `left` and `right`, containing the alternatives selected on the left and right columns. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. + + +## Examples + +#### Basic example + +```javascript +var maxdiff_page = { + type: 'maxdiff', + alternatives: ['apple', 'orange', 'pear', 'banana'], + labels: ['Most Preferred', 'Least Preferred'], + preamble: '
Please select your most preferred and least preferred fruits.
' +}; +``` \ No newline at end of file diff --git a/docs/plugins/jspsych-preload.md b/docs/plugins/jspsych-preload.md new file mode 100644 index 0000000000..ed8e2a3ead --- /dev/null +++ b/docs/plugins/jspsych-preload.md @@ -0,0 +1,128 @@ +# jspsych-preload + +This plugin loads images, audio, and video files. It is used for loading files into the browser's memory before they are needed in the experiment, in order to improve stimulus and response timing, and avoid disruption to the experiment flow. We recommend using this plugin anytime you are loading media files, and especially when your experiment requires large and/or many media files. See the [Media Preloading page](/overview/media-preloading/) for more information. + +The preload trial will end as soon as all files have loaded successfully. The trial will end or stop with an error message when one of these two scenarios occurs (whichever comes first): (a) all files have not finished loading when the `max_load_time` duration is reached, or (b) all file requests have responded with either a load or fail event, and one or more files has failed to load. The `continue_after_error` parameter determines whether the trial will stop with an error message or end (allowing the experiment to continue) when preloading is not successful. + +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. While there are no specific parameters that are required, the plugin expects to be given a set of files to load through one or more of the following parameters: `auto_preload` or `trials` (for automatic loading), and/or `images`, `audio`, `video` (for manual loading). 
To automatically load files based on a timeline of trials, either set the `auto_preload` parameter to `true` (to load files based on the main timeline passed to `jsPsych.init`) or use the `trials` parameter to load files based on a specific subset of trials. To manually load a set of files, use the `images`, `audio`, and `video` parameters. You can combine automatic and manual loading methods in a single preload trial. + +All other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| --------------------- | -------------- | -------------------------------- | ---------------------------------------- | +| auto_preload | boolean | false | If `true`, the plugin will preload any files that can be automatically preloaded based on the main experiment timeline that is passed to `jsPsych.init`. If `false`, any file(s) to be preloaded should be specified by passing a timeline array to the `trials` parameter and/or an array of file paths to the `images`, `audio`, and/or `video` parameters. Setting this parameter to `false` is useful when you plan to preload your files in smaller batches throughout the experiment. | +| trials | timeline array | [] | An array containing one or more jsPsych trial or timeline objects. This parameter is useful when you want to automatically preload stimuli files from a specific subset of the experiment. See [Creating an Experiment: The Timeline](/overview/timeline) for information on constructing timelines. | +| images | array | [] | Array containing file paths for one or more image files to preload. This option is typically used for image files that can't be automatically preloaded from the timeline. | +| audio | array | [] | Array containing file paths for one or more audio files to preload. This option is typically used for audio files that can't be automatically preloaded from the timeline.
| +| video | array | [] | Array containing file paths for one or more video files to preload. This option is typically used for video files that can't be automatically preloaded from the timeline. | +| message | HTML string | null | HTML-formatted message to show above the progress bar while the files are loading. If `null`, then no message is shown. | +| show_progress_bar | boolean | true | If `true`, a progress bar will be shown while the files are loading. If `false`, no progress bar is shown. | +| continue_after_error | boolean | false | If `false`, then the experiment will stop during this trial if either (a) one or more of the files fails to load, and/or (b) all files do not finish loading before the `max_load_time` duration is reached. The trial will display the `error_message`, as well as the detailed error messages if `show_detailed_errors` is `true`. If `true`, the experiment will continue even if loading fails or times out, and information about loading success/failure will be stored in the trial data (see "Data Generated" below). | +| error_message | HTML string | 'The experiment failed to load.' | HTML-formatted message to be shown on the page after loading fails or times out. Only applies when `continue_after_error` is `false`. | +| show_detailed_errors | boolean | false | If `true`, and if `continue_after_error` is `false`, then a list of detailed errors will be shown below the `error_message`. This list will contain the file paths for any files that produced a loading failure, as well as a message indicating that loading timed out, if that was the case. This setting is intended to help the researcher with testing/debugging. If `false`, and if `continue_after_error` is `false`, then only the `error_message` will be shown if loading fails or times out. | +| max_load_time | numeric | null | Duration to wait, in milliseconds, for all files to load before loading times out. 
If one or more files has not finished loading within this time limit, then the trial will stop with an error (if `continue_after_error` is `false`), or the trial will end with information about the loading time-out in the trial data (see "Data Generated" below). If `null`, the trial will wait indefinitely for all files to either load or produce an error. | +| on_error | function | null | Function to be called immediately after a file loading request has returned an error. The function receives a single argument, which is the file path that produced the error. This callback is cancelled as soon as the trial ends. See example below. | +| on_success | function | null | Function to be called immediately after a file has successfully loaded. The function receives a single argument, which is the file path that finished loading. This callback is cancelled as soon as the trial ends. See example below. | + +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins/#data-collected-by-all-plugins), this plugin collects the following data for each trial. + +| Name | Type | Value | +| -------------- | ------- | ---------------------------------------- | +| success | boolean | If `true`, then all files loaded successfully within the `max_load_time`. If `false`, then one or more file requests returned a failure and/or the file loading did not complete within the `max_load_time` duration. | +| timeout | boolean | If `true`, then the files did not finish loading within the `max_load_time` duration. If `false`, then the file loading did not timeout. Note that when the preload trial does not timeout (`timeout: false`), it is still possible for loading to fail (`success: false`). This happens if one or more files fails to load and all file requests trigger either a success or failure event before the `max_load_time` duration. | +| failed_images | array | One or more image file paths that produced a loading failure before the trial ended. 
| +| failed_audio | array | One or more audio file paths that produced a loading failure before the trial ended. | +| failed_video | array | One or more video file paths that produced a loading failure before the trial ended. | + + +## Examples + +#### Loading files automatically based on the main timeline + +```javascript +var preload = { + type: 'preload', + auto_preload: true // automatically load all files based on the main timeline +}; + +// define other trials to add to the timeline... + +jsPsych.init({ + timeline: [preload, trial1, trial2, trial3] +}); +``` + +#### Loading files manually + +```javascript +var preload = { + type: 'preload', + images: ['file1.png'] +}; +``` + +#### Combining automatic and manual methods + +```javascript +// automatically load stimuli from the main timeline, +// and manually add any other stimuli files that can't be loaded automatically +var preload = { + type: 'preload', + auto_preload: true, + images: ['image1.png','image2.png'] +}; + +// define other trials to add to the timeline... + +jsPsych.init({ + timeline: [preload, trial1, trial2, trial3] +}); +``` + +#### Loading files in batches + +```javascript +var block_1 = { + timeline: [...] +} + +var block_2 = { + timeline: [...] 
+} + +var preload_1 = { + type: 'preload', + trials: block_1 // automatically load block_1 stimuli +}; + +var preload_2 = { + type: 'preload', + trials: block_2 // automatically load block_2 stimuli +}; + +jsPsych.init({ + // add each preload trial to the timeline before the appropriate trial block + timeline: [preload_1, block_1, preload_2, block_2] +}); +``` + +#### Using the on_success and on_error functions + +```javascript +var preload = { + type: 'preload', + audio: ['sound.mp3'], + on_success: function(file) { + console.log('File loaded: ',file); + }, + on_error: function(file) { + console.log('Error loading file: ',file); + } +}; +``` + +For more examples, see the jspsych-preload.html file in the jsPsych examples folder and the [Media Preloading](/overview/media-preloading) documentation page. \ No newline at end of file diff --git a/docs/plugins/jspsych-rdk.md b/docs/plugins/jspsych-rdk.md index 15bb0c6c14..81b1ed4a11 100644 --- a/docs/plugins/jspsych-rdk.md +++ b/docs/plugins/jspsych-rdk.md @@ -5,44 +5,44 @@ This plugin displays a Random Dot Kinematogram (RDK) and allows the subject to r We would appreciate it if you cited this paper when you use the RDK: Rajananda, S., Lau, H. & Odegaard, B., (2018). A Random-Dot Kinematogram for Web-Based Vision Research. Journal of Open Research Software. 6(1), p.6. DOI: [http://doi.org/10.5334/jors.194] -For optimal performance, fullscreen mode should be manually triggered by the user (e.g. F11 key in Chrome for Windows). Usage of the default Fullscreen trigger from the JsPsych API library with this plugin might result in the stimuli being displayed incorrectly. +For optimal performance, fullscreen mode should be manually triggered by the user (e.g. F11 key in Chrome for Windows). Usage of the default Fullscreen trigger from the jsPsych API library with this plugin might result in the stimuli being displayed incorrectly. ## Parameters
- -|Parameter|Type|Default Value| Descripton| -|---------|----|-------------|-----------| -|choices|array|[]|The valid keys that the subject can press as a response. Must be an array of chars or numbers (corresponding to JavaScript character codes). If left unspecified, any key is a valid key.| -|correct_choice|array, char, or number|undefined|The keys that are considered the correct response for that particular trial. Can be a single char, a single number, an array of chars, or an array of numbers. Numbers here correspond to the JavaScript character codes. This needs to be linked with the `coherent_direction` parameter (See Examples section below for an illustration.) This is used to determine whether the subject chose the correct response. The boolean indicating whether or not the subject chose the correct response is returned in the `correct` key of the data object. | -|trial_duration|numeric|500|The amount of time that the stimulus is displayed on the screen in ms. If -1, the stimulus will be displayed until the subject keys in a valid response. (`choices` parameter must contain valid keys or else the stimuli will run indefinitely).| -|response_ends_trial|boolean|true|If true, then the subject's response will end the trial. If false, the stimuli will be presented for the full `trial_duration` (the response will be recorded as long as the subject responds within the trial duration).| -|number_of_apertures|numeric|1|The number of apertures or RDKs on the screen. If set to more than one, remember to set the location (i.e., aperture_center_x and aperture_center_y) parameters to separate them.
In addition, each aperture can be customized individually by passing in an array of values as the parameter (see example below). If a single value (not an array) is passed as the parameter, then all apertures will have the same parameter.| -|number_of_dots|numeric|300|Number of dots per set. Equivalent to number of dots per frame.| -|number_of_sets|numeric|1|Number of sets to cycle through. Each frame displays one set of dots. (E.g. If 2 sets of dots, frame 1 will display dots from set 1, frame 2 will display dots from set 2, frame 3 will display sets from set 1, etc.)| -|coherent_direction|numeric|0|The direction of movement for coherent dots in degrees. 0 degrees is in the 3 o'clock direction, and increasing this number moves counterclockwise. (E.g. 12 o'clock is 90, 9 o'clock is 180, etc.) Range is 0 - 360.| -|coherence|numeric|0.5|The proportion of dots that move together in the coherent direction. Range is 0 to 1.| -|opposite_coherence|numeric|0|The proportion of moving in the direction opposite of the coherent direction. Range is 0 to (1-coherence).| -|dot_radius|numeric|2|The radius of each individual dot in pixels.| -|dot_life|numeric|-1|The number of frames that pass before a dot disappears and reappears in a new frame. -1 denotes that the dot life is infinite (i.e., a dot will only disappear and reappear if it moves out of the aperture).| -|move_distance|numeric|1|The number of pixel lengths the dot will move in each frame (analogous to speed of dots).| -|aperture_width|numeric|600|The width of the aperture in pixels. For a square aperture, this will determine both the width and height. For circular aperture, this will determine the diameter.| -|aperture_height|numeric|400|The height of the aperture in pixels. 
For square and circle apertures, this will be ignored.| -|dot_color|string|"white"|The color of the dots.| -|background_color|string|"gray"|The color of the background.| -|RDK_type|numeric|3|The Signal Selection Rule (Same/Different) and Noise Type (Random Position/Walk/Direction):

1 - Same && Random Position
2 - Same && Random Walk
3 - Same && Random Direction
4 - Different && Random Position
5 - Different && Random Walk
6 - Different && Random Direction

(See 'RDK parameter' below for more detailed information)
| -|aperture_type|numeric|2|The shape of the aperture.

1 - Circle
2 - Ellipse
3 - Square
4 - Rectangle
| -|reinsert_type|numeric|2|The type of reinsertion of a dot that has gone out of bounds

1 - Randomly appear anywhere in the aperture
2 - Appear on the opposite edge of the aperture. For squares and rectangles, a random point on the opposite edge is chosen as the reinsertion point. For circles and ellipses, the exit point is reflected about center to become the reinsertion point.
| -|aperture_center_x|numeric|window.innerWidth/2|The x-coordinate of the center of the aperture, in pixels.
| -|aperture_center_y|numeric|window.innerHeight/2|The y-coordinate of the center of the aperture, in pixels.
| -|fixation_cross|boolean|false|Whether or not a fixation cross is presented in the middle of the screen.
| -|fixation_cross_width|numeric|20|The width of the fixation cross in pixels.
| -|fixation_cross_height|numeric|20|The height of the fixation cross in pixels.
| -|fixation_cross_color|string|"black"|The color of the fixation cross.
| -|fixation_cross_thickness|numeric|1|The thickness of the fixation cross in pixels.
| -|border|boolean|false|The presence of a border around the aperture.
| -|border_thickness|numeric|1|The thickness of the border in pixels.
| -|border_color|string|"black"|The color of the border.
|
+In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable.
+
+| Parameter                | Type             | Default Value        | Description                              |
+| ------------------------ | ---------------- | -------------------- | ---------------------------------------- |
+| choices                  | array of strings | jsPsych.ALL_KEYS     | The valid keys that the subject can press as a response. Must be an array of strings. If left unspecified, any key is a valid key. |
+| correct_choice           | array or string  | *undefined*          | The keys that are considered the correct response for that particular trial. Can be a single string or an array of strings. This needs to be linked with the `coherent_direction` parameter (see Examples section below for an illustration). This is used to determine whether the subject chose the correct response. The boolean indicating whether or not the subject chose the correct response is returned in the `correct` key of the data object. |
+| trial_duration           | numeric          | 500                  | The amount of time that the stimulus is displayed on the screen in ms. If -1, the stimulus will be displayed until the subject keys in a valid response. (`choices` parameter must contain valid keys or else the stimuli will run indefinitely). |
+| response_ends_trial      | boolean          | true                 | If `true`, then the subject's response will end the trial. If `false`, the stimuli will be presented for the full `trial_duration` (the response will be recorded as long as the subject responds within the trial duration). |
+| number_of_apertures      | numeric          | 1                    | The number of apertures or RDKs on the screen. If set to more than one, remember to set the location (i.e., aperture_center_x and aperture_center_y) parameters to separate them.
In addition, each aperture can be customized individually by passing in an array of values as the parameter (see example below). If a single value (not an array) is passed as the parameter, then all apertures will have the same parameter. |
+| number_of_dots           | numeric          | 300                  | Number of dots per set. Equivalent to number of dots per frame. |
+| number_of_sets           | numeric          | 1                    | Number of sets to cycle through. Each frame displays one set of dots. (E.g. If 2 sets of dots, frame 1 will display dots from set 1, frame 2 will display dots from set 2, frame 3 will display dots from set 1, etc.) |
+| coherent_direction       | numeric          | 0                    | The direction of movement for coherent dots in degrees. 0 degrees is in the 3 o'clock direction, and increasing this number moves counterclockwise. (E.g. 12 o'clock is 90, 9 o'clock is 180, etc.) Range is 0 - 360. |
+| coherence                | numeric          | 0.5                  | The proportion of dots that move together in the coherent direction. Range is 0 to 1. |
+| opposite_coherence       | numeric          | 0                    | The proportion of dots moving in the direction opposite of the coherent direction. Range is 0 to (1-coherence). |
+| dot_radius               | numeric          | 2                    | The radius of each individual dot in pixels. |
+| dot_life                 | numeric          | -1                   | The number of frames that pass before a dot disappears and reappears in a new frame. -1 denotes that the dot life is infinite (i.e., a dot will only disappear and reappear if it moves out of the aperture). |
+| move_distance            | numeric          | 1                    | The number of pixel lengths the dot will move in each frame (analogous to speed of dots). |
+| aperture_width           | numeric          | 600                  | The width of the aperture in pixels. For a square aperture, this will determine both the width and height. For a circular aperture, this will determine the diameter. |
+| aperture_height          | numeric          | 400                  | The height of the aperture in pixels. For square and circle apertures, this will be ignored. |
+| dot_color                | string           | "white"              | The color of the dots.
| +| background_color | string | "gray" | The color of the background. | +| RDK_type | numeric | 3 | The Signal Selection Rule (Same/Different) and Noise Type (Random Position/Walk/Direction):

1 - Same && Random Position
2 - Same && Random Walk
3 - Same && Random Direction
4 - Different && Random Position
5 - Different && Random Walk
6 - Different && Random Direction

(See 'RDK parameter' below for more detailed information)
| +| aperture_type | numeric | 2 | The shape of the aperture.

1 - Circle
2 - Ellipse
3 - Square
4 - Rectangle
|
+| reinsert_type            | numeric          | 2                    | The type of reinsertion of a dot that has gone out of bounds:

1 - Randomly appear anywhere in the aperture
2 - Appear on the opposite edge of the aperture. For squares and rectangles, a random point on the opposite edge is chosen as the reinsertion point. For circles and ellipses, the exit point is reflected about center to become the reinsertion point.
| +| aperture_center_x | numeric | window.innerWidth/2 | The x-coordinate of the center of the aperture, in pixels.
| +| aperture_center_y | numeric | window.innerHeight/2 | The y-coordinate of the center of the aperture, in pixels.
| +| fixation_cross | boolean | false | Whether or not a fixation cross is presented in the middle of the screen.
| +| fixation_cross_width | numeric | 20 | The width of the fixation cross in pixels.
| +| fixation_cross_height | numeric | 20 | The height of the fixation cross in pixels.
| +| fixation_cross_color | string | "black" | The color of the fixation cross.
| +| fixation_cross_thickness | numeric | 1 | The thickness of the fixation cross in pixels.
| +| border | boolean | false | The presence of a border around the aperture.
| +| border_thickness | numeric | 1 | The thickness of the border in pixels.
| +| border_color | string | "black" | The color of the border.
| ### RDK type parameter ** See Fig. 1 in Scase, Braddick, and Raymond (1996) for a visual depiction of these different signal selection rules and noise types. @@ -59,19 +59,18 @@ Parameters can be left unspecified if the default value is acceptable. ## Data Generated -In addition to the default data collected by all plugins, this plugin collects all parameter data described above and the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects all parameter data described above and the following data for each trial. - -|Name|Type|Value| -|----|----|-----| -|rt|numeric|The response time in ms for the subject to make a response.| -|key_press|numeric|The key that the subject pressed. The value corresponds to the Javascript Char Code (Key Code).| -|correct|boolean|Whether or not the subject's key press corresponded to those provided in correct_choice.| -|frame_rate|numeric|The average frame rate for the trial. 0 denotes that the subject responded before the appearance of the second frame.| -|number_of_frames|numeric|The number of frames that was shown in this trial.| -|frame_rate_array|JSON string|The array that holds the number of miliseconds for each frame in this trial.| -|canvas_width|numeric|The width of the canvas in pixels.| -|canvas_height|numeric|The height of the canvas in pixels.| +| Name | Type | Value | +| ---------------- | ----------- | ---------------------------------------- | +| rt | numeric | The response time in ms for the subject to make a response. | +| response | string | The key that the subject pressed. | +| correct | boolean | Whether or not the subject's key press corresponded to those provided in correct_choice. | +| frame_rate | numeric | The average frame rate for the trial. 0 denotes that the subject responded before the appearance of the second frame. 
|
+| number_of_frames | numeric     | The number of frames that were shown in this trial. |
+| frame_rate_array | array       | The array that holds the number of milliseconds for each frame in this trial. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. |
+| canvas_width     | numeric     | The width of the canvas in pixels. |
+| canvas_height    | numeric     | The height of the canvas in pixels. |
 
 ## Example
@@ -80,12 +79,12 @@ In addition to the default data collected by all plugins, this plugin collects a
 ```javascript
 var trial_right = {
     coherent_direction: 0,
-    correct_choice: "P"
+    correct_choice: "p"
 };
 
 var trial_left = {
     coherent_direction: 180,
-    correct_choice: "Q"
+    correct_choice: "q"
 };
 ```
diff --git a/docs/plugins/jspsych-reconstruction.md b/docs/plugins/jspsych-reconstruction.md
index 1f7a4bbb01..471d9c2307 100644
--- a/docs/plugins/jspsych-reconstruction.md
+++ b/docs/plugins/jspsych-reconstruction.md
@@ -6,20 +6,20 @@ The stimulus must be defined through a function that returns an HTML-formatted s
 
 ## Parameters
 
-Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable.
+In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable.
 
 Parameter | Type | Default Value | Description
 ----------|------|---------------|------------
 stim_function | function | *undefined* | A function with a single parameter that returns an HTML-formatted string representing the stimulus.
 starting_value | numeric | 0.5 | The starting value of the stimulus parameter.
 step_size | numeric | 0.05 | The change in the stimulus parameter caused by pressing one of the modification keys.
-key_increase | key code | 'h' | The key to press for increasing the parameter value. -key_decrease | key code | 'g' | The key to press for decreasing the parameter value. +key_increase | string | 'h' | The key to press for increasing the parameter value. +key_decrease | string | 'g' | The key to press for decreasing the parameter value. button_label | string | 'Continue' | The text that appears on the button to finish the trial. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ diff --git a/docs/plugins/jspsych-resize.md b/docs/plugins/jspsych-resize.md index 211699693f..26f6403fae 100644 --- a/docs/plugins/jspsych-resize.md +++ b/docs/plugins/jspsych-resize.md @@ -4,7 +4,7 @@ This plugin displays a resizable div container that allows the user to drag unti ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -15,6 +15,15 @@ prompt | string | `''` | HTML content to display below the resizable box, and ab button_label | string | 'Continue' | Label to display on the button to complete calibration. starting_size | numeric | 100 | The initial size of the box, in pixels, along the largest dimension. 
The aspect ratio will be set automatically to match the item width and height. +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. + +Name | Type | Value +-----|------|------ +final_width_px | numeric | Final width of the resizable div container, in pixels. +scale_factor | numeric | Scaling factor that will be applied to the div containing jsPsych content. + ## Examples #### Measuring a credit card and resizing the display to have 150 pixels equal an inch. diff --git a/docs/plugins/jspsych-same-different-html.md b/docs/plugins/jspsych-same-different-html.md index ccfe309f09..3df3493f0a 100644 --- a/docs/plugins/jspsych-same-different-html.md +++ b/docs/plugins/jspsych-same-different-html.md @@ -4,38 +4,38 @@ The same-different-html plugin displays two stimuli sequentially. Stimuli are HT ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimuli | array | *undefined* | A pair of stimuli, represented as an array with two entries, one for each stimulus. A stimulus is a string containing valid HTML markup. Stimuli will be shown in the order that they are defined in the array. -answer | string | *undefined* | Either `'same'` or `'different'`. -same_key | numeric or string | 'Q' | The key that subjects should press to indicate that the two stimuli are the same. 
-different_key | numeric or string | 'P' | The key that subjects should press to indicate that the two stimuli are different. -timing_first_stim | numeric | 1000 | How long to show the first stimulus for in milliseconds. If the value of this parameter is null then the stimulus will be shown until the subject presses any key. -timing_gap | numeric | 500 | How long to show a blank screen in between the two stimuli. -timing_second_stim | numeric | 1000 | How long to show the second stimulus for in milliseconds. If the value of this parameter is null then the stimulus will be shown until the subject responds. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). +| Parameter | Type | Default Value | Description | +| -------------------- | ------- | ------------- | ---------------------------------------- | +| stimuli | array | *undefined* | A pair of stimuli, represented as an array with two entries, one for each stimulus. A stimulus is a string containing valid HTML markup. Stimuli will be shown in the order that they are defined in the array. | +| answer | string | *undefined* | Either `'same'` or `'different'`. | +| same_key | string | 'q' | The key that subjects should press to indicate that the two stimuli are the same. | +| different_key | string | 'p' | The key that subjects should press to indicate that the two stimuli are different. | +| first_stim_duration | numeric | 1000 | How long to show the first stimulus for in milliseconds. If the value of this parameter is null then the stimulus will be shown until the subject presses any key. | +| gap_duration | numeric | 500 | How long to show a blank screen in between the two stimuli. | +| second_stim_duration | numeric | 1000 | How long to show the second stimulus for in milliseconds. 
If the value of this parameter is null then the stimulus will be shown until the subject responds. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | string | An JSON-encoded array of length 2 containing either the path to the image file or the string containing the HTML formatted content that the subject saw for each trial. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the second stimulus first appears on the screen until the subject's response. -correct | boolean | `true` if the subject's response matched the `answer` for this trial. -answer | string | The correct answer to the trial, either `'same'` or `'different'`. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| stimulus | array | An array of length 2 containing the HTML-formatted content that the subject saw for each trial. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. 
The time is measured from when the second stimulus first appears on the screen until the subject's response. | +| correct | boolean | `true` if the subject's response matched the `answer` for this trial. | +| answer | string | The correct answer to the trial, either `'same'` or `'different'`. | -Additionally, if `timing_first_stim` is null, then the following data is also collected: +Additionally, if `first_stim_duration` is null, then the following data is also collected: -Name | Type | Value ------|------|------ -rt_stim1 | numeric | The response time in milliseconds for the subject to continue after the first stimulus. The time is measured from when the first stimulus appears on the screen until the subject's response. -key_press_stim1 | numeric | Indicates which key the subject pressed to continue. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. +| Name | Type | Value | +| --------------- | ------- | ---------------------------------------- | +| rt_stim1 | numeric | The response time in milliseconds for the subject to continue after the first stimulus. The time is measured from when the first stimulus appears on the screen until the subject's response. | +| response_stim1 | string | Indicates which key the subject pressed to continue. | ## Examples @@ -43,11 +43,11 @@ key_press_stim1 | numeric | Indicates which key the subject pressed to continue. ```javascript var trial = { - type: 'same-different', + type: 'same-different-html', stimuli: ['

Climbing

', '

Walking

'], - prompt: "

Press S if the texts imply the same amount of physical exertion. Press D if the texts imply different amount of physical exertion.

", - same_key: 'S', - different_key: 'D', + prompt: "

Press 's' if the texts imply the same amount of physical exertion. Press 'd' if the texts imply a different amount of physical exertion.

", + same_key: 's', + different_key: 'd', answer: 'different' } ``` diff --git a/docs/plugins/jspsych-same-different-image.md b/docs/plugins/jspsych-same-different-image.md index e1cad9184b..d066b6259c 100644 --- a/docs/plugins/jspsych-same-different-image.md +++ b/docs/plugins/jspsych-same-different-image.md @@ -4,38 +4,38 @@ The same-different plugin displays two stimuli sequentially. Stimuli are image o ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimuli | array | *undefined* | A pair of stimuli, represented as an array with two entries, one for each stimulus. The stimulus is a path to an image file. Stimuli will be shown in the order that they are defined in the array. -answer | string | *undefined* | Either `'same'` or `'different'`. -same_key | numeric or string | 'Q' | The key that subjects should press to indicate that the two stimuli are the same. -different_key | numeric or string | 'P' | The key that subjects should press to indicate that the two stimuli are different. -timing_first_stim | numeric | 1000 | How long to show the first stimulus for in milliseconds. If the value of this parameter is null then the stimulus will be shown until the subject presses any key. -timing_gap | numeric | 500 | How long to show a blank screen in between the two stimuli. -timing_second_stim | numeric | 1000 | How long to show the second stimulus for in milliseconds. If the value of this parameter is null then the stimulus will be shown until the subject responds. 
-prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). +| Parameter | Type | Default Value | Description | +| -------------------- | ------- | ------------- | ---------------------------------------- | +| stimuli | array | *undefined* | A pair of stimuli, represented as an array with two entries, one for each stimulus. The stimulus is a path to an image file. Stimuli will be shown in the order that they are defined in the array. | +| answer | string | *undefined* | Either `'same'` or `'different'`. | +| same_key | string | 'q' | The key that subjects should press to indicate that the two stimuli are the same. | +| different_key | string | 'p' | The key that subjects should press to indicate that the two stimuli are different. | +| first_stim_duration | numeric | 1000 | How long to show the first stimulus for in milliseconds. If the value of this parameter is null then the stimulus will be shown until the subject presses any key. | +| gap_duration | numeric | 500 | How long to show a blank screen in between the two stimuli. | +| second_stim_duration | numeric | 1000 | How long to show the second stimulus for in milliseconds. If the value of this parameter is null then the stimulus will be shown until the subject responds. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. 
+In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | string | An JSON-encoded array of length 2 containing either the path to the image file or the string containing the HTML formatted content that the subject saw for each trial. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the second stimulus first appears on the screen until the subject's response. -correct | boolean | `true` if the subject's response matched the `answer` for this trial. -answer | string | The correct answer to the trial, either `'same'` or `'different'`. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| stimulus | array | An array of length 2 containing the paths to the image files that the subject saw for each trial. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the second stimulus first appears on the screen until the subject's response. | +| correct | boolean | `true` if the subject's response matched the `answer` for this trial. | +| answer | string | The correct answer to the trial, either `'same'` or `'different'`. 
| -Additionally, if `timing_first_stim` is null, then the following data is also collected: +Additionally, if `first_stim_duration` is null, then the following data is also collected: -Name | Type | Value ------|------|------ -rt_stim1 | numeric | The response time in milliseconds for the subject to continue after the first stimulus. The time is measured from when the first stimulus appears on the screen until the subject's response. -key_press_stim1 | numeric | Indicates which key the subject pressed to continue. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. +| Name | Type | Value | +| --------------- | ------- | ---------------------------------------- | +| rt_stim1 | numeric | The response time in milliseconds for the subject to continue after the first stimulus. The time is measured from when the first stimulus appears on the screen until the subject's response. | +| response_stim1 | string | Indicates which key the subject pressed to continue. | ## Examples @@ -45,9 +45,9 @@ key_press_stim1 | numeric | Indicates which key the subject pressed to continue. var block = { type: 'same-different-image', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_3.jpg'], - prompt: "

Press S if the faces had the same emotional expression. Press D if the faces had different emotional expressions.

", - same_key: 'S', - different_key: 'D', + prompt: "

Press s if the faces had the same emotional expression. Press d if the faces had different emotional expressions.

", + same_key: 's', + different_key: 'd', answer: 'different' } ``` @@ -58,9 +58,9 @@ var block = { var block = { type: 'same-different-image', stimuli: ['img/happy_face_1.jpg', 'img/happy_face_3.jpg'], - prompt: "

<p>Press S if the faces had the same emotional expression. Press D if the faces had different emotional expressions.</p>", - same_key: 'S', - different_key: 'D', + prompt: "<p>Press s if the faces had the same emotional expression. Press d if the faces had different emotional expressions.</p>
", + same_key: 's', + different_key: 'd', answer: 'same' } ``` diff --git a/docs/plugins/jspsych-serial-reaction-time-mouse.md b/docs/plugins/jspsych-serial-reaction-time-mouse.md index c79e5a42fb..fb335640cd 100644 --- a/docs/plugins/jspsych-serial-reaction-time-mouse.md +++ b/docs/plugins/jspsych-serial-reaction-time-mouse.md @@ -4,30 +4,32 @@ The serial reaction time mouse plugin implements a generalized version of the SR ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -target | array | *undefined* | The location of the target. The array should be the `[row, column]` of the target. -grid | array | `[[1,1,1,1]]` | This array represents the grid of boxes shown on the screen. Each inner array represents a single row. The entries in the inner arrays represent the columns. If an entry is `1` then a square will be drawn at that location on the grid. If an entry is `0` then the corresponding location on the grid will be empty. Thus, by mixing `1`s and `0`s it is possible to create many different grid-based arrangements. -grid_square_size | numeric | 100 | The width and height in pixels of each square in the grid. -target_color | hex color code | `#999` | The color of the target square. -response_ends_trial | boolean | `true` | If true, the trial ends after a key press. Feedback is displayed if `show_response_feedback` is true. -pre_target_duration | numeric | 0 | The number of milliseconds to display the grid *before* the target changes color. -trial_duration | numeric | null | The maximum length of time of the trial, not including feedback. -fade_duration | numeric | null | If a positive number, the target will progressively change color at the start of the trial, with the transition lasting this many milliseconds. 
-allow_nontarget_responses | boolean | false | If true, the user can make nontarget response. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which keys to press). +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ------------------------- | -------------- | ------------- | ---------------------------------------- | +| target | array | *undefined* | The location of the target. The array should be the `[row, column]` of the target. | +| grid | array | `[[1,1,1,1]]` | This array represents the grid of boxes shown on the screen. Each inner array represents a single row. The entries in the inner arrays represent the columns. If an entry is `1` then a square will be drawn at that location on the grid. If an entry is `0` then the corresponding location on the grid will be empty. Thus, by mixing `1`s and `0`s it is possible to create many different grid-based arrangements. | +| grid_square_size | numeric | 100 | The width and height in pixels of each square in the grid. | +| target_color | hex color code | `#999` | The color of the target square. | +| response_ends_trial | boolean | `true` | If true, the trial ends after a mouse click. Feedback is displayed if `show_response_feedback` is true. | +| pre_target_duration | numeric | 0 | The number of milliseconds to display the grid *before* the target changes color. | +| trial_duration | numeric | null | The maximum length of time of the trial, not including feedback. 
| +| fade_duration | numeric | null | If a positive number, the target will progressively change color at the start of the trial, with the transition lasting this many milliseconds. | +| allow_nontarget_responses | boolean | false | If true, the user can make nontarget responses. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which keys to press). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -grid | JSON | A JSON-encoded representation of the grid. -target | JSON | A JSON-encoded representation of the target on the grid. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the second stimulus first appears on the screen until the subject's response.
| +| response | array | The `[row, column]` response location on the grid. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +| correct | boolean | Whether the response location matches the target location (`true`) or not (`false`). | ## Examples @@ -43,7 +45,7 @@ var trial = { #### 2x2 grid with extra space in the middle ```javascript var trial = { - type: 'serial-reaction-time', + type: 'serial-reaction-time-mouse', grid: [[1,0,1],[0,0,0],[1,0,1]], target: [0,2] } diff --git a/docs/plugins/jspsych-serial-reaction-time.md b/docs/plugins/jspsych-serial-reaction-time.md index 6baafdb750..fd39ea8125 100644 --- a/docs/plugins/jspsych-serial-reaction-time.md +++ b/docs/plugins/jspsych-serial-reaction-time.md @@ -4,34 +4,34 @@ The serial reaction time plugin implements a generalized version of the SRT task ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description ----------|------|---------------|------------ -target | array | *undefined* | The location of the target. The array should be the `[row, column]` of the target. -grid | array | `[[1,1,1,1]]` | This array represents the grid of boxes shown on the screen. Each inner array represents a single row. The entries in the inner arrays represent the columns. If an entry is `1` then a square will be drawn at that location on the grid. If an entry is `0` then the corresponding location on the grid will be empty. Thus, by mixing `1`s and `0`s it is possible to create many different grid-based arrangements. -choices | array | `[['3','5','7','9']]` | The dimensions of this array must match the dimensions of `grid`. Each entry in this array is the key that should be pressed for that corresponding location in the grid. Entries can be left blank if there is no key associated with that location of the grid.
-grid_square_size | numeric | 100 | The width and height in pixels of each square in the grid. -target_color | hex color code | `#999` | The color of the target square. -response_ends_trial | boolean | `true` | If true, the trial ends after a key press. Feedback is displayed if `show_response_feedback` is true. -pre_target_duration | numeric | 0 | The number of milliseconds to display the grid *before* the target changes color. -trial_duration | numeric | null | The maximum length of time of the trial, not including feedback. -show_response_feedback | boolean | false | If true, show feedback indicating where the user responded and whether it was correct. -feedback_duration | numeric | 200 |The length of time in milliseconds to show the feedback. -fade_duration | numeric | null | If a positive number, the target will progressively change color at the start of the trial, with the transition lasting this many milliseconds. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which keys to press). +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ---------------------- | ---------------- | --------------------- | ---------------------------------------- | +| target | array | *undefined* | The location of the target. The array should be the `[row, column]` of the target. | +| grid | array | `[[1,1,1,1]]` | This array represents the grid of boxes shown on the screen. Each inner array represents a single row. The entries in the inner arrays represent the columns. 
If an entry is `1` then a square will be drawn at that location on the grid. If an entry is `0` then the corresponding location on the grid will be empty. Thus, by mixing `1`s and `0`s it is possible to create many different grid-based arrangements. | +| choices | array of strings | `[['3','5','7','9']]` | The dimensions of this array must match the dimensions of `grid`. Each entry in this array is the key that should be pressed for that corresponding location in the grid. Entries can be left blank if there is no key associated with that location of the grid. | +| grid_square_size | numeric | 100 | The width and height in pixels of each square in the grid. | +| target_color | hex color code | `#999` | The color of the target square. | +| response_ends_trial | boolean | `true` | If true, the trial ends after a key press. Feedback is displayed if `show_response_feedback` is true. | +| pre_target_duration | numeric | 0 | The number of milliseconds to display the grid *before* the target changes color. | +| trial_duration | numeric | null | The maximum length of time of the trial, not including feedback. | +| show_response_feedback | boolean | false | If true, show feedback indicating where the user responded and whether it was correct. | +| feedback_duration | numeric | 200 | The length of time in milliseconds to show the feedback. | +| fade_duration | numeric | null | If a positive number, the target will progressively change color at the start of the trial, with the transition lasting this many milliseconds. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which keys to press). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. 
+In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -grid | JSON | A JSON-encoded representation of the grid. -target | JSON | A JSON-encoded representation of the target on the grid. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the second stimulus first appears on the screen until the subject's response. -correct | boolean | `true` if the subject's response matched the target. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| grid | array | The representation of the grid. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +| target | array | The representation of the target location on the grid. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the second stimulus first appears on the screen until the subject's response. | +| correct | boolean | `true` if the subject's response matched the target. | ## Examples diff --git a/docs/plugins/jspsych-survey-html-form.md b/docs/plugins/jspsych-survey-html-form.md index 60785f8ae7..317d6fc2be 100644 --- a/docs/plugins/jspsych-survey-html-form.md +++ b/docs/plugins/jspsych-survey-html-form.md @@ -4,7 +4,7 @@ The survey-html-form plugin displays a set of `<input>` from a HTML string. The ## Parameters -Parameters with a default value of *undefined* must be specified.
Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -12,15 +12,17 @@ html | string | *undefined* | HTML formatted string containing all the input ele preamble | string | empty string | HTML formatted string to display at the top of the page above all the questions. button_label | string | 'Continue' | The text that appears on the button to finish the trial. dataAsArray | boolean | false | Retrieve the data as an array e.g. [{name: "INPUT_NAME", value: "INPUT_VALUE"}, ...] instead of an object e.g. {INPUT_NAME: INPUT_VALUE, ...}. This might be useful if you omit naming your inputs. +autofocus | string | empty string | The HTML element ID of a form field to autofocus on. The focused element is the element that will receive keyboard events. For elements like `<input>` or `<textarea>`, autofocus means that the cursor will appear in the text input area when the trial loads. +autocomplete | boolean | false | This determines whether or not all of the input elements on the page should allow autocomplete. Setting this to true will enable autocomplete or auto-fill for the form. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ -responses | string | A JS object encoded in JSON format containing the response for each input.
The encoded object will have a separate variable for the response to each input, with each variable being named after its corresponding input element. Each response is a string containing whatever the subject answered for this particular input. -rt | numeric | The response time in milliseconds for the subject to make a response. +response | object | An object containing the response for each input. The object will have a separate key (variable) for the response to each input, with each variable being named after its corresponding input element. Each response is a string containing whatever the subject answered for this particular input. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +rt | numeric | The response time in milliseconds for the subject to make a response. | ## Examples @@ -33,3 +35,16 @@ var form_trial = { html: '

<p>I am feeling <input name="first" type="text" />, <input name="second" type="text" />, and <input name="third" type="text" />.</p>' }; ``` + +### Example using the autofocus parameter + +In this example, the browser will focus on the element with the ID `test-resp-box` when the trial loads. For `<input type="text">` elements, this means that the cursor will appear inside the text box. + +```javascript +var autofocus_trial = { + type: 'survey-html-form', + preamble: '

<p>What is your favorite bird?</p>', + html: '<p>My favorite bird is <input type="text" id="test-resp-box" name="response" size="10" /></p>
', + autofocus: 'test-resp-box' +}; +``` \ No newline at end of file diff --git a/docs/plugins/jspsych-survey-likert.md b/docs/plugins/jspsych-survey-likert.md index 58dca7809d..07adb51669 100644 --- a/docs/plugins/jspsych-survey-likert.md +++ b/docs/plugins/jspsych-survey-likert.md @@ -4,7 +4,7 @@ The survey-likert plugin displays a set of questions with Likert scale responses ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -13,16 +13,17 @@ randomize_question_order | boolean | `false` | If true, the display order of `qu preamble | string | empty string | HTML formatted string to display at the top of the page above all the questions. scale_width | numeric | null | The width of the likert scale in pixels. If left `null`, then the width of the scale will be equal to the width of the widest content on the page. button_label | string | 'Continue' | Label of the button. +autocomplete | boolean | false | This determines whether or not all of the input elements on the page should allow autocomplete. Setting this to true will enable autocomplete or auto-fill for the form. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. 
Name | Type | Value -----|------|------ -responses | JSON string | A string in JSON format containing the response for each question. The encoded object will have a separate variable for the response to each question, with the first question in the trial being recorded in `Q0`, the second in `Q1`, and so on. The responses are recorded as integers, representing the position of the slider on the scale. If the `name` parameter is defined for the question, then the response will use the value of `name` as the key for the response in the `responses` object. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the questions first appear on the screen until the subject's response. -question_order | JSON string | A string in JSON format containing an array with the order of questions. For example `[2,0,1]` would indicate that the first question was `trial.questions[2]` (the third item in the `questions` parameter), the second question was `trial.questions[0]`, and the final question was `trial.questions[1]`. +response | object | An object containing the response for each question. The object will have a separate key (variable) for each question, with the first question in the trial being recorded in `Q0`, the second in `Q1`, and so on. The responses are recorded as integers, representing the position selected on the likert scale for that question. If the `name` parameter is defined for the question, then the response object will use the value of `name` as the key for each question. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the questions first appear on the screen until the subject's response(s) are submitted. | +question_order | array | An array with the order of questions. 
For example `[2,0,1]` would indicate that the first question was `trial.questions[2]` (the third item in the `questions` parameter), the second question was `trial.questions[0]`, and the final question was `trial.questions[1]`. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | ## Examples diff --git a/docs/plugins/jspsych-survey-multi-choice.md b/docs/plugins/jspsych-survey-multi-choice.md index 39afe39a04..32eadb598d 100644 --- a/docs/plugins/jspsych-survey-multi-choice.md +++ b/docs/plugins/jspsych-survey-multi-choice.md @@ -4,7 +4,7 @@ The survey-multi-choice plugin displays a set of questions with multiple choice ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -12,22 +12,23 @@ questions | array | *undefined* | An array of objects, each object represents a randomize_question_order | boolean | `false` | If true, the display order of `questions` is randomly determined at the start of the trial. In the data object, `Q0` will still refer to the first question in the array, regardless of where it was presented visually. preamble | string | empty string | HTML formatted string to display at the top of the page above all the questions. button_label | string | 'Continue' | Label of the button. +autocomplete | boolean | false | This determines whether or not all of the input elements on the page should allow autocomplete. Setting this to true will enable autocomplete or auto-fill for the form. 
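To make the survey-multi-choice parameters above concrete, here is a minimal trial sketch. The question text, option labels, and the `VacationPref` name are invented for illustration; only the parameter names come from the table.

```javascript
// Hypothetical survey-multi-choice trial; question content is made up.
var vacation_options = ["Beach", "Mountains", "City", "No preference"];

var multi_choice_trial = {
  type: 'survey-multi-choice',
  questions: [
    {
      prompt: "Which kind of vacation do you prefer?", // shown above the options
      name: 'VacationPref',   // response stored under this key instead of Q0
      options: vacation_options,
      required: true
    }
  ],
  randomize_question_order: false,
  preamble: '<p>Please answer the question below.</p>',
  button_label: 'Continue',
  autocomplete: false
};
```

Because `name` is set, this question's answer would be recorded under `VacationPref` rather than the default `Q0`.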
## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ -responses | JSON string | A string in JSON format containing the response for each question. The encoded object will have a separate variable for the response to each question, with the first question in the trial being recorded in `Q0`, the second in `Q1`, and so on. The responses are recorded as the name of the option label. If the `name` parameter is defined for the question, then the response will use the value of `name` as the key for the response in the `responses` object. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the questions first appear on the screen until the subject's response. -question_order | JSON string | A string in JSON format containing an array with the order of questions. For example `[2,0,1]` would indicate that the first question was `trial.questions[2]` (the third item in the `questions` parameter), the second question was `trial.questions[0]`, and the final question was `trial.questions[1]`. +response | object | An object containing the response for each question. The object will have a separate key (variable) for each question, with the first question in the trial being recorded in `Q0`, the second in `Q1`, and so on. The responses are recorded as the name of the option label selected (string). If the `name` parameter is defined for the question, then the response object will use the value of `name` as the key for each question. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. 
| +rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the questions first appear on the screen until the subject's response(s) are submitted. | +question_order | array | An array with the order of questions. For example `[2,0,1]` would indicate that the first question was `trial.questions[2]` (the third item in the `questions` parameter), the second question was `trial.questions[0]`, and the final question was `trial.questions[1]`. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | ## Examples ```javascript var page_1_options = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]; -var page_2_options = ["Strongly Disagree", "Disagree", "Somewhat Disagree", "Neural", "Somewhat Agree", "Agree", "Strongly Agree"]; +var page_2_options = ["Strongly Disagree", "Disagree", "Somewhat Disagree", "Neutral", "Somewhat Agree", "Agree", "Strongly Agree"]; var multi_choice_block = { type: 'survey-multi-choice', diff --git a/docs/plugins/jspsych-survey-multi-select.md b/docs/plugins/jspsych-survey-multi-select.md index f9e17b2ee1..1e5b4c75f0 100644 --- a/docs/plugins/jspsych-survey-multi-select.md +++ b/docs/plugins/jspsych-survey-multi-select.md @@ -4,7 +4,7 @@ The survey-multi-select plugin displays a set of questions with multiple select ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. 
Parameter | Type | Default Value | Description ----------|------|---------------|------------ @@ -13,16 +13,17 @@ randomize_question_order | boolean | `false` | If true, the display order of `qu preamble | string | empty string | HTML formatted string to display at the top of the page above all the questions. button_label | string | 'Continue' | Label of the button. required_message | string | 'You must choose at least one response for this question' | Message to display if required response is not given. +autocomplete | boolean | false | This determines whether or not all of the input elements on the page should allow autocomplete. Setting this to true will enable autocomplete or auto-fill for the form. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ -responses | JSON string | An array containing all selected choices in JSON format for each question. The encoded object will have a separate variable for the response to each question, with the first question in the trial being recorded in `Q0`, the second in `Q1`, and so on. The responses are recorded as the name of the option label. If the `name` parameter is defined for the question, then the response will use the value of `name` as the key for the response in the `responses` object. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the questions first appear on the screen until the subject's response. -question_order | JSON string | A string in JSON format containing an array with the order of questions. 
For example `[2,0,1]` would indicate that the first question was `trial.questions[2]` (the third item in the `questions` parameter), the second question was `trial.questions[0]`, and the final question was `trial.questions[1]`. +response | object | An object containing the response for each question. The object will have a separate key (variable) for each question, with the first question in the trial being recorded in `Q0`, the second in `Q1`, and so on. For each question, the responses are recorded as arrays containing any response options that were selected (strings). If the `name` parameter is defined for the question, then the response object will use the value of `name` as the key for each question. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the questions first appear on the screen until the subject's response(s) were submitted. | +question_order | array | An array with the order of questions. For example `[2,0,1]` would indicate that the first question was `trial.questions[2]` (the third item in the `questions` parameter), the second question was `trial.questions[0]`, and the final question was `trial.questions[1]`. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | ## Examples diff --git a/docs/plugins/jspsych-survey-text.md b/docs/plugins/jspsych-survey-text.md index 05d947d77a..5a10708826 100644 --- a/docs/plugins/jspsych-survey-text.md +++ b/docs/plugins/jspsych-survey-text.md @@ -4,24 +4,25 @@ The survey-text plugin displays a set of questions with free response text field ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. 
+In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ -questions | array | *undefined* | An array of objects, each object represents a question that appears on the screen. Each object contains a prompt, value, required, rows, and columns parameter that will be applied to the question. See examples below for further clarification. `prompt`: Type string, default value of *undefined*. The string is the prompt for the subject to respond to. Each question gets its own response field. `placeholder`: Type string, default value of `""`. The string will create placeholder text in the text field. `required`: Boolean; if `true` then the user must enter a response to submit. `rows`: Type integer, default value of 1. The number of rows for the response text box. `columns`: Type integer, default value of 40. The number of columns for the response text box. `name`: Name of the question. Used for storing data. If left undefined then default names (`Q0`, `Q1`, `...`) will be used for the questions. +questions | array | *undefined* | An array of objects, each object represents a question that appears on the screen. Each object contains a prompt, value, required, rows, and columns parameter that will be applied to the question. See examples below for further clarification. `prompt`: Type string, default value of *undefined*. The string is the prompt for the subject to respond to. Each question gets its own response field. `value`: Type string, default value of `""`. The string will create placeholder text in the text field. `required`: Boolean; if `true` then the user must enter a response to submit. `rows`: Type integer, default value of 1. 
The number of rows for the response text box. `columns`: Type integer, default value of 40. The number of columns for the response text box. `name`: Name of the question. Used for storing data. If left undefined then default names (`Q0`, `Q1`, `...`) will be used for the questions. randomize_question_order | boolean | `false` | If true, the display order of `questions` is randomly determined at the start of the trial. In the data object, `Q0` will still refer to the first question in the array, regardless of where it was presented visually. preamble | string | empty string | HTML formatted string to display at the top of the page above all the questions. button_label | string | 'Continue' | The text that appears on the button to finish the trial. +autocomplete | boolean | false | This determines whether or not all of the input elements on the page should allow autocomplete. Setting this to true will enable autocomplete or auto-fill for the form. ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ -responses | JSON string | A string in JSON format containing the response for each question. The encoded object will have a separate variable for the response to each question, with the first question in the trial being recorded in `Q0`, the second in `Q1`, and so on. Each response is a string containing whatever the subject typed into the associated text box. If the `name` parameter is defined for the question, then the response will use the value of `name` as the key for the response in the `responses` object. -rt | numeric | The response time in milliseconds for the subject to make a response. 
-question_order | JSON string | A string in JSON format containing an array with the order of questions. For example `[2,0,1]` would indicate that the first question was `trial.questions[2]` (the third item in the `questions` parameter), the second question was `trial.questions[0]`, and the final question was `trial.questions[1]`. +response | object | An object containing the response for each question. The object will have a separate key (variable) for each question, with the first question in the trial being recorded in `Q0`, the second in `Q1`, and so on. For each question, the response is a string containing whatever text was in the response box when the responses were submitted. If the `name` parameter is defined for the question, then the response object will use the value of `name` as the key for each question. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the questions first appear on the screen until the subject's response(s) were submitted. | +question_order | array | An array with the order of questions. For example `[2,0,1]` would indicate that the first question was `trial.questions[2]` (the third item in the `questions` parameter), the second question was `trial.questions[0]`, and the final question was `trial.questions[1]`. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | ## Examples diff --git a/docs/plugins/jspsych-video-button-response.md b/docs/plugins/jspsych-video-button-response.md index b136970f66..50618bc737 100644 --- a/docs/plugins/jspsych-video-button-response.md +++ b/docs/plugins/jspsych-video-button-response.md @@ -1,15 +1,17 @@ # jspsych-video-button-response plugin -This plugin plays a video and records responses generated by button click. 
The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically if the subject has failed to respond within a fixed length of time. The button itself can be customized using HTML formatting. +This plugin plays a video and records responses generated by button click. The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically when the subject responds, when the video file has finished playing, or if the subject has failed to respond within a fixed length of time. You can also prevent a button response from being made before the video has finished playing. The button itself can be customized using HTML formatting. + +Video files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the video stimulus, you will need to [manually preload](/overview/media-preloading/#manual-preloading) the videos. Also note that video preloading is disabled when the experiment is running as a file (i.e. opened directly in the browser, rather than through a server), in order to prevent CORS errors - see the section on [Running Experiments](/overview/running-experiments.md) for more information. ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ -sources | array | *undefined* | An array of file paths to the video. 
You can specify multiple formats of the same video (e.g., .mp4, .ogg, .webm) to maximize the [cross-browser compatibility](https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats). Usually .mp4 is a safe cross-browser option. The player will use the first source file in the array that is compatible with the browser, so specify the files in order of preference. -choices | array of strings | [] | Labels for the buttons. Each different string in the array will generate a different button. +stimulus | array | *undefined* | An array of file paths to the video. You can specify multiple formats of the same video (e.g., .mp4, .ogg, .webm) to maximize the [cross-browser compatibility](https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats). Usually .mp4 is a safe cross-browser option. The plugin does not reliably support .mov files. The player will use the first source file in the array that is compatible with the browser, so specify the files in order of preference. +choices | array of strings | *undefined* | Labels for the buttons. Each different string in the array will generate a different button. button_html | HTML string | `''` | A template of HTML for generating the button elements. You can override this to create customized buttons of various kinds. The string `%choice%` will be changed to the corresponding element of the `choices` array. You may also specify an array of strings, if you need different HTML to render for each button. If you do specify an array, the `choices` array and this array must have the same length. The HTML from position 0 in the `button_html` array will be used to create the button for element 0 in the `choices` array, and so on. margin_vertical | string | '0px' | Vertical margin of the button(s). margin_horizontal | string | '8px' | Horizontal margin of the button(s). 
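To make the `button_html` templating concrete, here is a short sketch of the substitution behavior described above. This mirrors the documented behavior rather than the plugin's internal code; the `jspsych-btn` class is jsPsych's default button styling.

```javascript
// Sketch of the `%choice%` substitution described for `button_html`.
// A single template string can be reused for every entry in `choices`.
var choices = ['Yes', 'No'];
var button_html = '<button class="jspsych-btn">%choice%</button>';

// Each occurrence of %choice% is replaced with the corresponding label:
var buttons = choices.map(function (choice) {
  return button_html.replace(/%choice%/g, choice);
});
// buttons[0] → '<button class="jspsych-btn">Yes</button>'
```

If `button_html` is instead given as an array, the template at index *i* is applied only to `choices[i]`, so the two arrays must have the same length.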
@@ -23,25 +25,26 @@ stop| numeric | null | If given a value, the video will stop at this time point rate | numeric | null | The playback rate of the video. 1 is normal, <1 is slower, >1 is faster. trial_ends_after_video | bool | false | If true, then the trial will end as soon as the video file finishes playing. trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +response_allowed_while_playing | boolean | true | If true, then responses are allowed while the video is playing. If false, then the video must finish playing before the button choices are enabled and a response is accepted. Once the video has played all the way through, the buttons are enabled and a response is allowed (including while the video is being re-played via on-screen playback controls). 
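Putting the playback-related parameters together, a trial that withholds the buttons until the video finishes might look like the following sketch (the file paths and labels are hypothetical placeholders):

```javascript
// Sketch of a video-button-response trial using the parameters above.
// The video file paths are placeholders, not files shipped with jsPsych.
var trial = {
  type: 'video-button-response',
  stimulus: ['video/sample_video.mp4', 'video/sample_video.webm'],
  choices: ['Agree', 'Disagree'],
  // Keep the buttons disabled until the video has played all the way through:
  response_allowed_while_playing: false,
  // End the trial as soon as a (post-video) button response is made:
  response_ends_trial: true
};
```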
## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ -button_pressed | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -stimulus | string | JSON encoding of the `sources` array. +response | numeric | Indicates which button the subject pressed. The first button in the `choices` array is 0, the second is 1, and so on. +rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. +stimulus | array | The `stimulus` array. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. ## Example ```javascript var trial = { - type: 'video-keyboard-response', - sources: [ + type: 'video-button-response', + stimulus: [ 'video/sample_video.mp4', 'video/sample_video.ogg' ], diff --git a/docs/plugins/jspsych-video-keyboard-response.md b/docs/plugins/jspsych-video-keyboard-response.md index 3c1ee1fc59..e450f01aad 100644 --- a/docs/plugins/jspsych-video-keyboard-response.md +++ b/docs/plugins/jspsych-video-keyboard-response.md @@ -1,44 +1,46 @@ # jspsych-video-keyboard-response plugin -This plugin plays a video file and records a keyboard response. Various aspects of the timing, video playback, and keyboard options can be controlled through parameters. +This plugin plays a video file and records a keyboard response. 
The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically when the subject responds, when the video file has finished playing, or if the subject has failed to respond within a fixed length of time. You can also prevent a keyboard response from being recorded before the video has finished playing. -## Parameters +Video files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the video stimulus, you will need to [manually preload](/overview/media-preloading/#manual-preloading) the videos. Also note that video preloading is disabled when the experiment is running as a file (i.e. opened directly in the browser, rather than through a server), in order to prevent CORS errors - see the section on [Running Experiments](/overview/running-experiments.md) for more information. -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -sources | array | *undefined* | An array of file paths to the video. You can specify multiple formats of the same video (e.g., .mp4, .ogg, .webm) to maximize the [cross-browser compatibility](https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats). Usually .mp4 is a safe cross-browser option. The player will use the first source file in the array that is compatible with the browser, so specify the files in order of preference. -prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). 
-width | numeric | width of the video file | The width of the video display in pixels. -height | numeric | heigh of the video file | The height of the video display in pixels. -autoplay | boolean | true | If true, the video will begin playing as soon as it has loaded. -controls | boolean | false | If true, controls for the video player will be available to the subject. They will be able to pause the video or move the playback to any point in the video. -start | numeric | null | If given a value, the video will start at this time point in seconds. -stop| numeric | null | If given a value, the video will stop at this time point in seconds. -rate | numeric | null | The playback rate of the video. 1 is normal, <1 is slower, >1 is faster. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. -trial_ends_after_video | bool | false | If true, then the trial will end as soon as the video file finishes playing. -trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). 
If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +## Parameters +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ------------------------------ | ---------------- | ----------------------- | ---------------------------------------- | +| stimulus | array | *undefined* | An array of file paths to the video. You can specify multiple formats of the same video (e.g., .mp4, .ogg, .webm) to maximize the [cross-browser compatibility](https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats). Usually .mp4 is a safe cross-browser option. The plugin does not reliably support .mov files. The player will use the first source file in the array that is compatible with the browser, so specify the files in order of preference. | +| prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). | +| width | numeric | width of the video file | The width of the video display in pixels. | +| height | numeric | height of the video file | The height of the video display in pixels. | +| autoplay | boolean | true | If true, the video will begin playing as soon as it has loaded. | +| controls | boolean | false | If true, controls for the video player will be available to the subject. They will be able to pause the video or move the playback to any point in the video.
| +| start | numeric | null | If given a value, the video will start at this time point in seconds. | +| stop | numeric | null | If given a value, the video will stop at this time point in seconds. | +| rate | numeric | null | The playback rate of the video. 1 is normal, <1 is slower, >1 is faster. | +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. | +| trial_ends_after_video | bool | false | If true, then the trial will end as soon as the video file finishes playing. | +| trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. | +| response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. 
| +| response_allowed_while_playing | boolean | true | If true, then responses are allowed while the video is playing. If false, then the video must finish playing before a keyboard response is accepted. Once the video has played all the way through, a valid keyboard response is allowed (including while the video is being re-played via on-screen playback controls). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -stimulus | string | JSON encoding of the `sources` array. +| Name | Type | Value | +| --------- | ------- | ---------------------------------------- | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| stimulus | array | The `stimulus` array. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions.
| ## Example ```javascript var trial = { type: 'video-keyboard-response', - sources: [ + stimulus: [ 'video/sample_video.mp4', 'video/sample_video.ogg' ], diff --git a/docs/plugins/jspsych-video-slider-response.md b/docs/plugins/jspsych-video-slider-response.md index fb7ffe73b1..c0031fcbb8 100644 --- a/docs/plugins/jspsych-video-slider-response.md +++ b/docs/plugins/jspsych-video-slider-response.md @@ -1,14 +1,16 @@ # jspsych-video-slider-response plugin -This plugin plays a video and allows the subject to respond by dragging a slider. +This plugin plays a video and allows the subject to respond by dragging a slider. The stimulus can be displayed until a response is given, or for a pre-determined amount of time. The trial can be ended automatically when the subject responds, when the video file has finished playing, or if the subject has failed to respond within a fixed length of time. You can also prevent the slider response from being made before the video has finished playing. + +Video files can be automatically preloaded by jsPsych using the [`preload` plugin](jspsych-preload.md). However, if you are using timeline variables or another dynamic method to specify the video stimulus, you will need to [manually preload](/overview/media-preloading/#manual-preloading) the videos. Also note that video preloading is disabled when the experiment is running as a file (i.e. opened directly in the browser, rather than through a server), in order to prevent CORS errors - see the section on [Running Experiments](/overview/running-experiments.md) for more information. ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. 
Other parameters can be left unspecified if the default value is acceptable. Parameter | Type | Default Value | Description ----------|------|---------------|------------ -sources | array | *undefined* | An array of file paths to the video. You can specify multiple formats of the same video (e.g., .mp4, .ogg, .webm) to maximize the [cross-browser compatibility](https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats). Usually .mp4 is a safe cross-browser option. The player will use the first source file in the array that is compatible with the browser, so specify the files in order of preference. +stimulus | array | *undefined* | An array of file paths to the video. You can specify multiple formats of the same video (e.g., .mp4, .ogg, .webm) to maximize the [cross-browser compatibility](https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats). Usually .mp4 is a safe cross-browser option. The plugin does not reliably support .mov files. The player will use the first source file in the array that is compatible with the browser, so specify the files in order of preference. prompt | string | null | This string can contain HTML markup. Any content here will be displayed below the stimulus. The intention is that it can be used to provide a reminder about the action the subject is supposed to take (e.g., which key to press). width | numeric | width of the video file | The width of the video display in pixels. height | numeric | heigh of the video file | The height of the video display in pixels. @@ -21,30 +23,34 @@ min | integer | 0 | Sets the minimum value of the slider. max | integer | 100 | Sets the maximum value of the slider. slider_start | integer | 50 | Sets the starting value of the slider step | integer | 1 | Sets the step of the slider. This is the smallest amount by which the slider can change. +labels | array of strings | [] | Labels displayed at equidistant locations on the slider. 
For example, two labels will be placed at the ends of the slider. Three labels would place two at the ends and one in the middle. Four will place two at the ends, and the other two will be at 33% and 67% of the slider width. slider_width | integer | null | Set the width of the slider in pixels. If left null, then the width will be equal to the widest element in the display. require_movement | boolean | false | If true, the subject must move the slider before clicking the continue button. button_label | string | 'Continue' | Label of the button to end the trial. trial_ends_after_video | bool | false | If true, then the trial will end as soon as the video file finishes playing. trial_duration | numeric | null | How long to wait for the subject to make a response before ending the trial in milliseconds. If the subject fails to make a response before this timer is reached, the subject's response will be recorded as null for the trial and the trial will end. If the value of this parameter is null, then the trial will wait for a response indefinitely. -response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `timing_response` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can use this parameter to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. +response_ends_trial | boolean | true | If true, then the trial will end whenever the subject makes a response (assuming they make their response before the cutoff specified by the `trial_duration` parameter). If false, then the trial will continue until the value for `trial_duration` is reached. You can set this parameter to `false` to force the subject to view a stimulus for a fixed amount of time, even if they respond before the time is complete. 
+response_allowed_while_playing | boolean | true | If true, then responses are allowed while the video is playing. If false, then the video must finish playing before the slider is enabled and the trial can end via the next button click. Once the video has played all the way through, the slider is enabled and a response is allowed (including while the video is being re-played via on-screen playback controls). ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. Name | Type | Value -----|------|------ response | numeric | The numeric value of the slider. rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. -stimulus | string | JSON encoding of the `sources` array. +stimulus | array | The `stimulus` array. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. +slider_start | numeric | The starting value of the slider. +start | numeric | The start time of the video clip. ## Example ```javascript var trial = { type: 'video-slider-response', - sources: [ + stimulus: [ 'video/sample_video.mp4', 'video/sample_video.ogg' ], diff --git a/docs/plugins/jspsych-virtual-chinrest.md b/docs/plugins/jspsych-virtual-chinrest.md new file mode 100644 index 0000000000..3e8fea7f3b --- /dev/null +++ b/docs/plugins/jspsych-virtual-chinrest.md @@ -0,0 +1,105 @@ +# jspsych-virtual-chinrest + +This plugin provides a "virtual chinrest" that can measure the distance between the participant and the screen. 
It can also standardize the jsPsych page content to a known physical dimension (e.g., ensuring that a 200px wide stimulus is 2.2cm wide on the participant's monitor). This is based on the work of [Li, Joo, Yeatman, and Reinecke (2020)](https://doi.org/10.1038/s41598-019-57204-1), and the plugin code is a modified version of [their implementation](https://github.com/QishengLi/virtual_chinrest). We recommend citing their work in any paper that makes use of this plugin. + +!!! note "Citation" + Li, Q., Joo, S. J., Yeatman, J. D., & Reinecke, K. (2020). Controlling for Participants’ Viewing Distance in Large-Scale, Psychophysical Online Experiments Using a Virtual Chinrest. _Scientific Reports, 10_(1), 1-11. doi: [10.1038/s41598-019-57204-1](https://doi.org/10.1038/s41598-019-57204-1) + +The plugin works in two phases. + +**Phase 1**. To calculate the pixel-to-cm conversion rate for a participant’s display, participants are asked to place a credit card or other item of the same size on the screen and resize an image until it is the same size as the credit card. Since we know the physical dimensions of the card, we can find the conversion rate for the participant's display. + +**Phase 2**. To measure the participant's viewing distance from their screen we use a blind spot task. Participants are asked to focus on a black square on the screen with their right eye closed, while a red dot repeatedly sweeps from right to left. They press the spacebar on their keyboard whenever they perceive that the red dot has disappeared. This part allows the plugin to use the distance between the black square and the red dot when it disappears from eyesight to estimate how far the participant is from the monitor. This estimation assumes that the blind spot is located at 13.5° temporally. + +## Dependency + +This plugin requires the SVG.js library, available at [https://svgjs.com](https://svgjs.com/docs/3.0/). You must include the library in the `<head>` section of your experiment page.
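As a rough sketch of how the plugin might be used, the trial object below runs the measurements and then rescales the display to 50 pixels per degree of visual angle. The parameter values here are illustrative, not recommended defaults.

```javascript
// Minimal virtual-chinrest trial sketch; values are illustrative.
// After the trial, jsPsych content is rescaled so that one degree of
// visual angle corresponds to 50 pixels on the participant's screen.
var chinrest_trial = {
  type: 'virtual-chinrest',
  blindspot_reps: 5,    // number of blind-spot sweeps to average
  resize_units: 'deg',  // rescale content in degrees of visual angle
  pixels_per_unit: 50   // 50 px per unit (degree) after rescaling
};
```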
+ +## Parameters + +Parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ----------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| resize_units | string | "none" | Units to resize the jsPsych content to after the trial is over: `"none"` `"cm"` `"inch"` or `"deg"`. If `"none"`, no resizing will be done to the jsPsych content after the virtual-chinrest trial ends. | +| pixels_per_unit | numeric | 100 | After the scaling factor is applied, this many pixels will equal one unit of measurement, where the units are indicated by `resize_units`. This is only used when resizing is done after the trial ends (i.e. the `resize_units` parameter is not "none"). | +| adjustment_prompt | HTML string | "Click and drag the lower right corner of the image until it is the same size as a credit card held up to the screen. You can use any card that is the same size as a credit card, like a membership card or driver's license. If you do not have access to a real card you can use a ruler to measure the image width to 3.37 inches or 85.6 mm." | This string can contain HTML markup. Any content here will be displayed **below the card stimulus** during the resizing phase.
| +| adjustment_button_prompt | HTML string | "Click here when the image is the correct size" | Content of the button displayed below the card stimulus during the resizing phase. | +| item_path | string | "img/card.png" | Path of the item to be presented in the card stimulus during the resizing phase. _The default image is available in `/examples/img/card.png`_ | +| item_height_mm | numeric | 53.98 | The known height of the physical item (e.g. credit card) to be measured, in mm. | +| item_width_mm | numeric | 85.6 | The known width of the physical item (e.g. credit card) to be measured, in mm. | +| item_init_size | numeric | 250 | The initial size of the card stimulus, in pixels, along its largest dimension. | +| blindspot_reps | numeric | 5 | How many times to measure the blindspot location. If `0`, blindspot will not be detected, and viewing distance and degree data will not be computed. | +| blindspot_prompt | HTML string | "Now we will quickly measure how far away you are sitting. Put your left hand on the space bar. Cover your right eye with your right hand. Using your left eye, focus on the black square. Keep your focus on the black square. The red ball will disappear as it moves from right to left. Press the space bar as soon as the ball disappears. Press the space bar when you are ready to begin." | This string can contain HTML markup. Any content here will be displayed **above the blindspot task**. | +| redo_measurement_button_label | HTML string | 'No, that is not close. Try again' | Text for the button on the viewing distance report page to re-do the viewing distance estimate. If the participant clicks this button, the blindspot task starts again. | +| blindspot_done_prompt | HTML string | "Yes" | Text for the button on the viewing distance report page that can be clicked to accept the viewing distance estimate.
| +| blindspot_measurements_prompt | HTML string | 'Remaining measurements: ' | Text accompanying the remaining measurements counter that appears below the blindspot task. | +| viewing_distance_report | HTML string | "Based on your responses, you are sitting about `<span id='distance-estimate'></span>` from the screen. Does that seem about right?" | Estimated viewing distance data displayed after the blindspot task. If `"none"` is given, viewing distance will not be reported to the participant. The HTML `span` element with `id = distance-estimate` returns the distance. | + +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. + +_Note: The deg data are **only** returned if viewing distance is estimated with the blindspot method (px2deg, win_height_deg, win_width_deg, item_width_deg)._ + +| Name | Type | Value | +| --------------- | ------- | ----- | +| rt | numeric | The response time in milliseconds. | +| item_height_mm | numeric | The height in millimeters of the item to be measured. | +| item_width_mm | numeric | The width in millimeters of the item to be measured. | +| item_height_deg | numeric | Final height of the resizable div container, in degrees. | +| item_width_deg | numeric | Final width of the resizable div container, in degrees. | +| item_width_px | numeric | Final width of the resizable div container, in pixels. | +| px2deg | numeric | Pixels to degrees conversion factor. | +| px2mm | numeric | Pixels to millimeters conversion factor. | +| scale_factor | numeric | Scaling factor that will be applied to the div containing jsPsych content. | +| win_width_deg | numeric | The interior width of the window in degrees. | +| win_height_deg | numeric | The interior height of the window in degrees. | +| view_dist_mm | numeric | Estimated distance to the screen in millimeters.
| + +## Example + +```javascript +// three blindspot measurements +// measure px2mm, viewing distance and px2deg +// do not resize the jsPsych content after this trial +// note: pixels_per_unit will be ignored since there is no resizing (resize_units: "none") +let no_resize = { + type: "virtual-chinrest", + blindspot_reps: 3, + resize_units: "none", + pixels_per_unit: 50, +}; + +// no blindspot task +// resize to cm (50 pixels per unit) +// measure px2mm, but not viewing distance and px2deg (because blindspot_reps is 0) +// note: you may still choose to estimate viewing distance even if resizing to cm or inches +let cm_resize = { + type: "virtual-chinrest", + blindspot_reps: 0, + resize_units: "cm", + pixels_per_unit: 50, +}; + +// three blindspot measurements +// measure px2mm, viewing distance and px2deg +// resize to degrees of visual angle (50 pixels per unit) +// don't report viewing distance to subject +let deg_resize = { + type: "virtual-chinrest", + blindspot_reps: 3, + resize_units: "deg", + pixels_per_unit: 50, + viewing_distance_report: "none", +}; + +// resizing to degrees with no blindspot measurement is not possible +// this trial will throw an error +let error_trial = { + type: "virtual-chinrest", + blindspot_reps: 0, + resize_units: "deg", + pixels_per_unit: 50, +}; +``` diff --git a/docs/plugins/jspsych-visual-search-circle.md b/docs/plugins/jspsych-visual-search-circle.md index 0793d9786a..bb71ee6925 100644 --- a/docs/plugins/jspsych-visual-search-circle.md +++ b/docs/plugins/jspsych-visual-search-circle.md @@ -6,35 +6,35 @@ This plugin presents a customizable visual-search task modelled after [Wang, Cav ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -target_present | boolean | *undefined* | Is the target present?
-set_size | numeric | *undefined* | How many items should be displayed? -target | string | *undefined* | Path to image file that is the search target. -foil | string or array | *undefined* | Path to image file that is the foil/distractor. Can specify an array of distractors if the distractors are all different images. -fixation_image | string | *undefined* | Path to image file that is a fixation target. -target_size | array | `[50, 50]` | Two element array indicating the height and width of the search array element images. -fixation_size | array | `[16, 16]` | Two element array indicating the height and width of the fixation image. -circle_diameter | numeric | 250 | The diameter of the search array circle in pixels. -target_present_key | numeric | 74 | The key to press if the target is present in the search array. -target_absent_key | numeric | 70 | The key to press if the target is not present in the search array. -trial_duration | numeric | null | The maximum amount of time the subject is allowed to search before the trial will continue. A value of null will allow the subject to search indefinitely. -fixation_duration | numeric | 1000 | How long to show the fixation image for before the search array (in milliseconds). +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +| Parameter | Type | Default Value | Description | +| ------------------ | --------------- | ------------- | ---------------------------------------- | +| target_present | boolean | *undefined* | Is the target present? | +| set_size | numeric | *undefined* | How many items should be displayed? | +| target | string | *undefined* | Path to image file that is the search target. 
| +| foil | string or array | *undefined* | Path to image file that is the foil/distractor. Can specify an array of distractors if the distractors are all different images. | +| fixation_image | string | *undefined* | Path to image file that is a fixation target. | +| target_size | array | `[50, 50]` | Two element array indicating the height and width of the search array element images. | +| fixation_size | array | `[16, 16]` | Two element array indicating the height and width of the fixation image. | +| circle_diameter | numeric | 250 | The diameter of the search array circle in pixels. | +| target_present_key | string | 'j' | The key to press if the target is present in the search array. | +| target_absent_key | string | 'f' | The key to press if the target is not present in the search array. | +| trial_duration | numeric | null | The maximum amount of time the subject is allowed to search before the trial will continue. A value of null will allow the subject to search indefinitely. | +| fixation_duration | numeric | 1000 | How long to show the fixation image for before the search array (in milliseconds). | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -correct | boolean | True if the subject gave the correct response. -key_press | numeric | Indicates which key the subject pressed. The value is the [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) corresponding to the subject's response. -rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. 
-set_size | numeric | The number of items in the search array -target_present | boolean | True if the target is present in the search array -locations | JSON string | JSON-encoded array where each element of the array is the pixel value of the center of an image in the search array. If the target is present, then the first element will represent the location of the target. +| Name | Type | Value | +| -------------- | ----------- | ---------------------------------------- | +| correct | boolean | True if the subject gave the correct response. | +| response | string | Indicates which key the subject pressed. | +| rt | numeric | The response time in milliseconds for the subject to make a response. The time is measured from when the stimulus first appears on the screen until the subject's response. | +| set_size | numeric | The number of items in the search array | +| target_present | boolean | True if the target is present in the search array | +| locations | array | Array where each element is the pixel value of the center of an image in the search array. If the target is present, then the first element will represent the location of the target. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | ## Example diff --git a/docs/plugins/jspsych-vsl-animate-occlusion.md b/docs/plugins/jspsych-vsl-animate-occlusion.md index 2f047ace29..61889967ee 100644 --- a/docs/plugins/jspsych-vsl-animate-occlusion.md +++ b/docs/plugins/jspsych-vsl-animate-occlusion.md @@ -10,27 +10,27 @@ This plugin requires the Snap.svg library, available at [http://www.snapsvg.io]( ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimuli | array | *undefined* | Each element of the array is a stimulus. A stimulus is a path to an image file. 
The order of stimuli in the array determines the order of the animation sequence. -canvas_size | array | `[400, 400]` | Array specifying the width and height of the area that the animation will display in. Stimuli will move to the edges of this area, so increasing the width without increasing the `timing_cycle` parameter will speed up the images. -image_size | array | `[100, 100]` | Array specifying the width and height of the images to show. The occluding rectangle will have a width equal to the width of image_size. -initial_direction | string | "left" | Which direction the stimulus should move first (subsequent directions will alternate). Choices are "left" or "right". -occlude_center | boolean | true | If true, display a rectangle in the center of the screen that is just wide enough to occlude the image completely as it passes behind. -choices | array of keycodes | `jsPsych.ALL_KEYS` | This array contains the keys that the subject is allowed to press in order to respond to the stimulus. Keys can be specified as their [numeric key code](http://www.cambiaresearch.com/articles/15/javascript-char-codes-key-codes) or as characters (e.g., `'a'`, `'q'`). The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed. -cycle_duration | numeric | 1000 | How long it takes for a stimulus in the sequence to make a complete cycle (move to the edge and back to the center) in milliseconds. -pre_movement_duration | numeric | 500 | How long to wait before the stimuli starts moving from behind the center rectangle. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. 
+ +| Parameter | Type | Default Value | Description | +| --------------------- | ---------------- | ------------------ | ---------------------------------------- | +| stimuli | array | *undefined* | Each element of the array is a stimulus. A stimulus is a path to an image file. The order of stimuli in the array determines the order of the animation sequence. | +| canvas_size | array | `[400, 400]` | Array specifying the width and height of the area that the animation will display in. Stimuli will move to the edges of this area, so increasing the width without increasing the `cycle_duration` parameter will speed up the images. | +| image_size | array | `[100, 100]` | Array specifying the width and height of the images to show. The occluding rectangle will have a width equal to the width of image_size. | +| initial_direction | string | "left" | Which direction the stimulus should move first (subsequent directions will alternate). Choices are "left" or "right". | +| occlude_center | boolean | true | If true, display a rectangle in the center of the screen that is just wide enough to occlude the image completely as it passes behind. | +| choices | array of strings | `jsPsych.ALL_KEYS` | This array contains the key(s) that the subject is allowed to press in order to respond to the stimulus. Keys should be specified as characters (e.g., `'a'`, `'q'`, `' '`, `'Enter'`, `'ArrowDown'`) - see [this page](https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent/key/Key_Values) and [this page (event.key column)](https://www.freecodecamp.org/news/javascript-keycode-list-keypress-event-key-codes/) for more examples. Any key presses that are not listed in the array will be ignored. The default value of `jsPsych.ALL_KEYS` means that all keys will be accepted as valid responses. Specifying `jsPsych.NO_KEYS` will mean that no responses are allowed.
| +| cycle_duration | numeric | 1000 | How long it takes for a stimulus in the sequence to make a complete cycle (move to the edge and back to the center) in milliseconds. | +| pre_movement_duration | numeric | 500 | How long to wait before the stimuli start moving from behind the center rectangle. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | JSON string | A JSON encoded array where each element of the array is a stimulus from the sequence, in the order that they were shown. -responses | JSON string | A JSON encoded array containing all response information. The encoded object is an array containing one element for each valid response. Each response item has three properties: `key` the key code of the response key, `stimulus` the index of the stimulus that was displayed when the response was made, and `rt` the response time measured since the start of the sequence. +| Name | Type | Value | +| --------- | ----------- | ---------------------------------------- | +| stimulus | array | Array where each element is a stimulus from the sequence, in the order that they were shown. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | +| response | array | Array containing all response information. Each element in the array is an object representing each valid response. Each response item has three properties: `key` the key that was pressed, `stimulus` the index of the stimulus that was displayed when the response was made, and `rt` the response time measured since the start of the sequence.
This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | ## Examples diff --git a/docs/plugins/jspsych-vsl-grid-scene.md b/docs/plugins/jspsych-vsl-grid-scene.md index 66525971c0..2a104c969b 100644 --- a/docs/plugins/jspsych-vsl-grid-scene.md +++ b/docs/plugins/jspsych-vsl-grid-scene.md @@ -6,21 +6,21 @@ Fiser, J., & Aslin, R. N. (2001). Unsupervised statistical learning of higher-or ## Parameters -Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -stimuli | array | *undefined* | An array that defines a grid. Grids should be declared as two dimensional arrays in `[row][col]` order, with paths to image files in the locations where images are displayed, and 0 in blank spaces. See example below. -image_size | array | `[100, 100]` | Array specifying the width and height of the images to show. Grid cells will also be this size, with 10% padding. -timing_duration | numeric | 2000 | How long to show the stimulus for in milliseconds. +| Parameter | Type | Default Value | Description | +| -------------- | ------- | ------------- | ---------------------------------------- | +| stimuli | array | *undefined* | An array that defines a grid. Grids should be declared as two dimensional arrays in `[row][col]` order, with paths to image files in the locations where images are displayed, and 0 in blank spaces. See example below. | +| image_size | array | `[100, 100]` | Array specifying the width and height of the images to show. 
Grid cells will also be this size, with 10% padding. | +| trial_duration | numeric | 2000 | How long to show the stimulus for in milliseconds. | ## Data Generated -In addition to the [default data collected by all plugins](overview#datacollectedbyplugins), this plugin collects the following data for each trial. +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. -Name | Type | Value ------|------|------ -stimulus | JSON string | JSON encoded array of the stimulus shown on the trial. +| Name | Type | Value | +| -------- | ----------- | ---------------------------------------- | +| stimulus | array | Two dimensional array representing the stimulus shown on the trial. This will be encoded as a JSON string when data is saved using the `.json()` or `.csv()` functions. | ### Stimulus Creation Method diff --git a/docs/plugins/jspsych-webgazer-calibrate.md b/docs/plugins/jspsych-webgazer-calibrate.md new file mode 100644 index 0000000000..c33890f3fd --- /dev/null +++ b/docs/plugins/jspsych-webgazer-calibrate.md @@ -0,0 +1,61 @@ +# jspsych-webgazer-calibrate + +This plugin can be used to calibrate the [WebGazer extension](/extensions/jspsych-ext-webgazer). For a narrative description of eye tracking with jsPsych, see the [eye tracking overview](/overview/eye-tracking). + +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +Parameter | Type | Default Value | Description +----------|------|---------------|------------ +calibration_points | array | `[[10,10], [10,50], [10,90], [50,10], [50,50], [50,90], [90,10], [90,50], [90,90]]` | Array of points in `[x,y]` coordinates. 
Specified as a percentage of the screen width and height, from the left and top edge. The default grid is 9 points. +calibration_mode | string | `'click'` | Can specify `click` to have subjects click on calibration points or `view` to have subjects passively watch calibration points. +repetitions_per_point | numeric | 1 | The number of times to repeat the sequence of calibration points. +point_size | numeric | 20 | Diameter of the calibration points in pixels. +randomize_calibration_order | bool | `false` | Whether to randomize the order of the calibration points. +time_to_saccade | numeric | 1000 | If `calibration_mode` is set to `view`, then this is the delay before calibrating after showing a point. Gives the participant time to fixate on the new target before assuming that the participant is looking at the target. +time_per_point | numeric | 1000 | If `calibration_mode` is set to `view`, then this is the length of time to show a point while calibrating. Note that if `click` calibration is used then the point will remain on the screen until clicked. + +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. + +Name | Type | Value +-----|------|------ + +No data currently added by this plugin. Use the [webgazer-validate](/plugins/jspsych-webgazer-validate) plugin to measure the precision and accuracy of calibration. 
+ +## Example + +#### Click-based calibration with 5 points + +```javascript +var calibration = { + type: 'webgazer-calibrate', + calibration_points: [[50,50], [25,25], [25,75], [75,25], [75,75]], + repetitions_per_point: 2, + randomize_calibration_order: true +} +``` + +#### View-based calibration with 39 points, concentrated in the center + +```javascript +var calibration = { + type: 'webgazer-calibrate', + calibration_points: [ + [10,10],[10,50],[10,90], + [30,10],[30,50],[30,90], + [40,10],[40,30],[40,40],[40,45],[40,50],[40,55],[40,60],[40,70],[40,90], + [50,10],[50,30],[50,40],[50,45],[50,50],[50,55],[50,60],[50,70],[50,90], + [60,10],[60,30],[60,40],[60,45],[60,50],[60,55],[60,60],[60,70],[60,90], + [70,10],[70,50],[70,90], + [90,10],[90,50],[90,90] + ], + repetitions_per_point: 1, + randomize_calibration_order: true, + calibration_mode: 'view', + time_per_point: 500, + time_to_saccade: 1000 +} +``` diff --git a/docs/plugins/jspsych-webgazer-init-camera.md b/docs/plugins/jspsych-webgazer-init-camera.md new file mode 100644 index 0000000000..6233d47961 --- /dev/null +++ b/docs/plugins/jspsych-webgazer-init-camera.md @@ -0,0 +1,30 @@ +# jspsych-webgazer-init-camera + +This plugin initializes the camera and helps the participant center their face in the camera view for using the [WebGazer extension](/extensions/jspsych-ext-webgazer). For a narrative description of eye tracking with jsPsych, see the [eye tracking overview](/overview/eye-tracking). + +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +Parameter | Type | Default Value | Description +----------|------|---------------|------------ +instructions | string | too long to put here | Instructions for the participant to follow.
+button_text | string | Continue | The text for the button that participants click to end the trial. + +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. + +Name | Type | Value +-----|------|------ +load_time | numeric | The time it took for webgazer to initialize. This can be a long time in some situations, so this value is recorded for troubleshooting when participants are reporting difficulty. + +## Example + +#### Parameterless use + +```javascript +var init_camera = { + type: 'webgazer-init-camera' +} +``` diff --git a/docs/plugins/jspsych-webgazer-validate.md b/docs/plugins/jspsych-webgazer-validate.md new file mode 100644 index 0000000000..bb06068a59 --- /dev/null +++ b/docs/plugins/jspsych-webgazer-validate.md @@ -0,0 +1,44 @@ +# jspsych-webgazer-validate + +This plugin can be used to measure the accuracy and precision of gaze predictions made by the [WebGazer extension](/extensions/jspsych-ext-webgazer). For a narrative description of eye tracking with jsPsych, see the [eye tracking overview](/overview/eye-tracking). + +## Parameters + +In addition to the [parameters available in all plugins](/overview/plugins#parameters-available-in-all-plugins), this plugin accepts the following parameters. Parameters with a default value of *undefined* must be specified. Other parameters can be left unspecified if the default value is acceptable. + +Parameter | Type | Default Value | Description +----------|------|---------------|------------ +validation_points | array | `[[10,10], [10,50], [10,90], [50,10], [50,50], [50,90], [90,10], [90,50], [90,90]]` | Array of points in `[x,y]` coordinates. The default grid is 9 points. Meaning of coordinates controlled by `validation_point_coordinates` parameter. 
+validation_point_coordinates | string | `'percent'` | Can specify `percent` to have validation point coordinates specified in percentage of screen width and height, or `center-offset-pixels` to specify each point as the distance in pixels from the center of the screen. +roi_radius | numeric | 200 | Tolerance around the validation point in pixels when calculating the percent of gaze measurements within the acceptable range. +repetitions_per_point | numeric | 1 | The number of times to repeat the sequence of validation points. +randomize_validation_order | bool | `false` | Whether to randomize the order of the validation points. +time_to_saccade | numeric | 1000 | The delay before validating after showing a point. Gives the participant time to fixate on the new target before assuming that the participant is looking at the target. +validation_duration | numeric | 2000 | The length of time to show each validation point while gaze data is collected, in milliseconds. +point_size | numeric | 20 | Diameter of the validation points in pixels. +show_validation_data | bool | false | If `true` then a visualization of the validation data will be shown on the screen after the validation is complete. This will show each measured gaze location color coded by whether it is within the `roi_radius` of the target point. This is mainly intended for testing and debugging. + +## Data Generated + +In addition to the [default data collected by all plugins](/overview/plugins#data-collected-by-all-plugins), this plugin collects the following data for each trial. + +Name | Type | Value +-----|------|------ +raw_gaze | array | Raw gaze data for the trial. The array will contain a nested array for each validation point. Within each nested array will be a list of `{x,y,dx,dy}` values specifying the absolute x and y pixels, as well as the distance from the target for that gaze point.
+percent_in_roi | array | The percentage of samples within the `roi_radius` for each validation point. +average_offset | array | The average `x` and `y` distance from each validation point, plus the median distance `r` of the points from this average offset. +samples_per_sec | numeric | The average number of samples per second. Calculated by finding samples per second for each point and then averaging these estimates together. +validation_points | array | The list of validation points, in the order that they appeared. + +## Example + +#### 4 point validation using center offset mode + +```javascript +var validation = { + type: 'webgazer-validate', + validation_points: [[-200,-200], [-200,200], [200,-200], [200,200]], + validation_point_coordinates: 'center-offset-pixels', + show_validation_data: true +} +``` diff --git a/docs/plugins/list-of-plugins.md b/docs/plugins/list-of-plugins.md new file mode 100644 index 0000000000..4259eeedd6 --- /dev/null +++ b/docs/plugins/list-of-plugins.md @@ -0,0 +1,54 @@ +# List of Plugins + +These are the plugins that are included in the jsPsych release. If you don't see a plugin that will work for your needs, you can post on [GitHub Discussions](https://github.com/jspsych/jsPsych/discussions) to see if anyone else in the community has an unofficial plugin to share or to get help creating a new plugin. You can also view the [documentation on creating a new plugin](/overview/plugins/#creating-a-new-plugin) or [watch a video tutorial on creating a new plugin](https://www.youtube.com/watch?v=XQcsFwAmbiw&list=PLnfo1lBY1P2Mf_o6rV5wiqqn92Mw3UTGh&index=4). + +Plugin | Description +------ | ----------- +[jspsych‑animation](/plugins/jspsych-animation) | Shows a sequence of images at a specified frame rate. Records key presses (including timing information) made by the subject while they are viewing the animation. 
+[jspsych‑audio‑button‑response](/plugins/jspsych-audio-button-response) | Play an audio file and allow the subject to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons. +[jspsych‑audio‑keyboard‑response](/plugins/jspsych-audio-keyboard-response) | Play an audio file and allow the subject to respond by pressing a key. +[jspsych‑audio‑slider‑response](/plugins/jspsych-audio-slider-response) | Play an audio file and allow the subject to respond by moving a slider to indicate a value. +[jspsych‑call‑function](/plugins/jspsych-call-function) | Executes an arbitrary function call. Doesn't display anything to the subject, and the subject is usually unaware that this plugin has even executed. It's useful for performing tasks at specified times in the experiment, such as saving data. +[jspsych‑canvas‑button‑response](/plugins/jspsych-canvas-button-response) | Draw a stimulus on a [HTML canvas element](https://www.w3schools.com/html/html5_canvas.asp), and record a button click response. Useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). +[jspsych‑canvas‑keyboard‑response](/plugins/jspsych-canvas-keyboard-response) | Draw a stimulus on a [HTML canvas element](https://www.w3schools.com/html/html5_canvas.asp), and record a key press response. Useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). +[jspsych‑canvas‑slider‑response](/plugins/jspsych-canvas-slider-response) | Draw a stimulus on a [HTML canvas element](https://www.w3schools.com/html/html5_canvas.asp), and ask the subject to respond by moving a slider to indicate a value. Useful for displaying dynamic, parametrically-defined graphics, and for controlling the positioning of multiple graphical elements (shapes, text, images). 
+[jspsych‑categorize‑animation](/plugins/jspsych-categorize-animation) | The subject responds to an animation and can be given feedback about their response.
+[jspsych‑categorize‑html](/plugins/jspsych-categorize-html) | The subject responds to an HTML-formatted stimulus using the keyboard and can be given feedback about the correctness of their response.
+[jspsych‑categorize‑image](/plugins/jspsych-categorize-image) | The subject responds to an image using the keyboard and can be given feedback about the correctness of their response.
+[jspsych‑cloze](/plugins/jspsych-cloze) | Plugin for displaying a cloze test and checking participants' answers against a correct solution.
+[jspsych‑external‑html](/plugins/jspsych-external-html) | Displays an external HTML page (such as a consent form) and lets the subject respond by clicking a button or pressing a key. The plugin can validate their response, which is useful for making sure that a subject has granted consent before starting the experiment.
+[jspsych‑free‑sort](/plugins/jspsych-free-sort) | Displays a set of images on the screen in random locations. Subjects can click and drag the images to move them around the screen. Records all the moves made by the subject, so the sequence of moves can be recovered from the data.
+[jspsych‑fullscreen](/plugins/jspsych-fullscreen) | Toggles the experiment in and out of fullscreen mode.
+[jspsych‑html‑button‑response](/plugins/jspsych-html-button-response) | Display an HTML-formatted stimulus and allow the subject to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons.
+[jspsych‑html‑keyboard‑response](/plugins/jspsych-html-keyboard-response) | Display an HTML-formatted stimulus and allow the subject to respond by pressing a key.
+[jspsych‑html‑slider‑response](/plugins/jspsych-html-slider-response) | Display an HTML-formatted stimulus and allow the subject to respond by moving a slider to indicate a value.
+[jspsych‑iat‑html](/plugins/jspsych-iat-html) | The implicit association task, using HTML-formatted stimuli.
+[jspsych‑iat‑image](/plugins/jspsych-iat-image) | The implicit association task, using images as stimuli.
+[jspsych‑image‑button‑response](/plugins/jspsych-image-button-response) | Display an image and allow the subject to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons.
+[jspsych‑image‑keyboard‑response](/plugins/jspsych-image-keyboard-response) | Display an image and allow the subject to respond by pressing a key.
+[jspsych‑image‑slider‑response](/plugins/jspsych-image-slider-response) | Display an image and allow the subject to respond by moving a slider to indicate a value.
+[jspsych‑instructions](/plugins/jspsych-instructions) | For displaying instructions to the subject. Allows the subject to navigate between pages of instructions using keys or buttons.
+[jspsych‑maxdiff](/plugins/jspsych-maxdiff) | Displays rows of alternatives to be selected for two mutually exclusive categories, typically as 'most' or 'least' on a particular criterion (e.g. importance, preference, similarity). The participant responds by selecting one radio button corresponding to an alternative in both the left and right response columns.
+[jspsych‑preload](/plugins/jspsych-preload) | This plugin loads images, audio, and video files into the browser's memory before they are needed in the experiment, in order to improve stimulus and response timing, and to avoid disrupting the flow of the experiment.
+[jspsych‑rdk](/plugins/jspsych-rdk) | This plugin displays a Random Dot Kinematogram (RDK) and allows the subject to report the primary direction of motion by pressing a key on the keyboard.
+[jspsych‑reconstruction](/plugins/jspsych-reconstruction) | The subject interacts with a stimulus by modifying a parameter of the stimulus and observing the change in the stimulus in real time.
+[jspsych‑resize](/plugins/jspsych-resize) | Calibrate the display so that materials display with a known physical size.
+[jspsych‑same‑different‑html](/plugins/jspsych-same-different-html) | A same-different judgment task. An HTML-formatted stimulus is shown, followed by a brief gap, and then another stimulus is shown. The subject indicates whether the stimuli are the same or different.
+[jspsych‑same‑different‑image](/plugins/jspsych-same-different-image) | A same-different judgment task. An image is shown, followed by a brief gap, and then another stimulus is shown. The subject indicates whether the stimuli are the same or different.
+[jspsych‑serial‑reaction‑time](/plugins/jspsych-serial-reaction-time) | A set of boxes are displayed on the screen and one of them changes color. The subject presses a key that corresponds to the different-colored box as fast as possible.
+[jspsych‑serial‑reaction‑time‑mouse](/plugins/jspsych-serial-reaction-time-mouse) | A set of boxes are displayed on the screen and one of them changes color. The subject clicks the box that changed color as fast as possible.
+[jspsych‑survey‑html‑form](/plugins/jspsych-survey-html-form) | Renders a custom HTML form. Allows for mixing multiple kinds of form input.
+[jspsych‑survey‑likert](/plugins/jspsych-survey-likert) | Displays Likert-style questions.
+[jspsych‑survey‑multi‑choice](/plugins/jspsych-survey-multi-choice) | Displays multiple choice questions with one answer allowed per question.
+[jspsych‑survey‑multi‑select](/plugins/jspsych-survey-multi-select) | Displays multiple choice questions with multiple answers allowed per question.
+[jspsych‑survey‑text](/plugins/jspsych-survey-text) | Shows a prompt with a text box. The subject writes a response and then submits by clicking a button.
+[jspsych‑video‑button‑response](/plugins/jspsych-video-button-response) | Displays a video file with many options for customizing playback. Subject responds to the video by pressing a button.
+[jspsych‑video‑keyboard‑response](/plugins/jspsych-video-keyboard-response) | Displays a video file with many options for customizing playback. Subject responds to the video by pressing a key.
+[jspsych‑video‑slider‑response](/plugins/jspsych-video-slider-response) | Displays a video file with many options for customizing playback. Subject responds to the video by moving a slider.
+[jspsych‑virtual‑chinrest](/plugins/jspsych-virtual-chinrest) | An implementation of the "virtual chinrest" procedure developed by [Li, Joo, Yeatman, and Reinecke (2020)](https://doi.org/10.1038/s41598-019-57204-1). Calibrates the monitor to display items at a known physical size by having participants scale an image to be the same size as a physical credit card. Then uses a blind spot task to estimate the distance between the participant and the display.
+[jspsych‑visual‑search‑circle](/plugins/jspsych-visual-search-circle) | A customizable visual-search task modelled after [Wang, Cavanagh, & Green (1994)](http://dx.doi.org/10.3758/BF03206946). The subject indicates whether or not a target is present among a set of distractors. The stimuli are displayed in a circle, evenly spaced and equidistant from a fixation point.
+[jspsych‑vsl‑animate‑occlusion](/plugins/jspsych-vsl-animate-occlusion) | A visual statistical learning paradigm based on [Fiser & Aslin (2002)](http://dx.doi.org/10.1037//0278-7393.28.3.458). A sequence of stimuli is shown in an oscillatory motion. An occluding rectangle is in the center of the display, and the stimuli change when they are behind the rectangle.
+[jspsych‑vsl‑grid‑scene](/plugins/jspsych-vsl-grid-scene) | A visual statistical learning paradigm based on [Fiser & Aslin (2001)](http://dx.doi.org/10.1111/1467-9280.00392). A scene made up of individual stimuli arranged in a grid is shown. This plugin can also generate the HTML code to render the stimuli for use in other plugins.
+[jspsych‑webgazer‑calibrate](/plugins/jspsych-webgazer-calibrate) | Calibrates the WebGazer extension for eye tracking. +[jspsych‑webgazer‑init‑camera](/plugins/jspsych-webgazer-init-camera) | Initializes the camera and helps the participant center their face for eye tracking. +[jspsych‑webgazer‑validate](/plugins/jspsych-webgazer-validate) | Performs validation to measure precision and accuracy of WebGazer eye tracking predictions. diff --git a/docs/plugins/overview.md b/docs/plugins/overview.md deleted file mode 100644 index 6114f93df1..0000000000 --- a/docs/plugins/overview.md +++ /dev/null @@ -1,107 +0,0 @@ -# Plugins - -In jsPsych, plugins define the kinds of tasks that subjects perform in experiments. Some plugins define very general tasks, like displaying instructions or displaying a visual stimulus and getting a keyboard response. Other plugins are more specific, displaying particular kinds of interactive stimuli, or running a specific version of particular kind of task. Creating an experiment with jsPsych involves figuring out which plugins are needed for the kinds of tasks you want to have your subjects perform. - -Plugins provide a structure for a particular task, but often allow for significant customization and flexibility. For example, the `jspsych-image-keyboard-response` plugin defines a simple structure for showing an image and collecting a keyboard response. You can specify the what the stimulus is, what keys the subject is allowed to press, and how long the stimulus should be on the screen, how long the subject has to respond, and so on. Many of these content options have reasonable default values; even though the `jspsych-image-keyboard-response` plugin has many different options, you only *need* to specify the stimulus in order to use it. Each plugin has its own documentation page, which describes what the plugin does and what options are available. 
- -## Using a plugin - -To use a plugin, you'll need to load the plugin's JavaScript file on your experiment page: - -```html - - - -``` - -Once a plugin is loaded, you can define a trial that uses that plugin. The following JavaScript code defines a trial using the `jspsych-image-keyboard-response` plugin to display an image file ('images/happy_face.jpg'). This trial uses the default values for valid keys, length of display, and other parameters. You could override these values by adding them to the object. - -```javascript -var single_stim_trial = { - type: 'image-keyboard-response', - stimulus: 'images/happy_face.jpg' -} -``` - -Here's an exampe of overriding the default value for `post_trial_gap`: - -```javascript -var single_stim_trial = { - type: 'image-keyboard-response', - stimulus: 'images/happy_face.jpg', - post_trial_gap: 2000 -} -``` - -## Parameters available in all plugins - -Each plugin specifies its own set of parameters. Check the documentation for a plugin to see what parameters are available and what they do. - -In addition, there is a set of parameters that can be specified for any plugin. - -Parameter | Type | Default Value | Description -----------|------|---------------|------------ -post_trial_gap | numeric | null | Sets the time, in milliseconds, between the current trial and the next trial. If null, there will be no gap. -on_finish | function | `function(){ return; }` | A callback function to execute when the trial finishes. See [this page](../overview/callbacks.md) for more details. -on_start | function | `function(){ return; }` | A callback function to execute when the trial begins, before any loading has occurred. See [this page](../overview/callbacks.md) for more details. -on_load | function | `function(){ return; }` | A callback function to execute when the trial has loaded, which typically happens after the initial display of the plugin has loaded. See [this page](../overview/callbacks.md) for more details. 
-data | object | *undefined* | An object containing additional data to store for the trial. See [this page](../overview/data.md) for more details. - -## Data collected by plugins - -Each plugin defines what data is collected on the trial. The documentation for each plugin specifies what data is collected by that plugin. - -In addition to the data collected by a plugin, there is a default set of data that is collected on every trial. The collected data are: - -Name | Type | Value ------|------|------ -trial_type | string | The name of the plugin used to run the trial. -trial_index | numeric | The index of the current trial across the whole experiment. -time_elapsed | numeric | The number of milliseconds since the start of the experiment when the trial ended. -internal_node_id | string | A string identifier for the current TimelineNode. - -## List of available plugins - -This table is a description of all plugins that are distributed with jsPsych. Click on the name of a plugin to view its documentation page. - -Plugin | Description ------- | ----------- -[jspsych‑animation](jspsych-animation) | Shows a sequence of images at a specified frame rate. Records key presses (including timing information) made by the subject while they are viewing the animation. -[jspsych‑audio‑button‑response](jspsych-audio-button-response) | Play an audio file and allow the subject to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons. -[jspsych‑audio‑keyboard‑response](jspsych-audio-keyboard-response) | Play an audio file and allow the subject to respond by pressing a key. -[jspsych‑audio‑slider‑response](jspsych-audio-slider-response) | Play an audio file and allow the subject to respond by moving a slider to indicate a value. -[jspsych‑call‑function](jspsych-call-function) | Executes an arbitrary function call. 
Doesn't display anything to the subject, and the subject is usually unaware that this plugin has even executed. It's useful for performing tasks at specified times in the experiment, such as saving data. -[jspsych‑categorize‑animation](jspsych-categorize-animation) | The subject responds to an animation and can be given feedback about their response. -[jspsych‑categorize‑html](jspsych-categorize-html) | The subject responds to an HTML-formatted stimulus using the keyboard and can be given feedback about the correctness of their response. -[jspsych‑categorize‑image](jspsych-categorize-image) | The subject responds to an image using the keyboard and can be given feedback about the correctness of their response. -[jspsych‑cloze](jspsych-cloze) | Plugin for displaying a cloze test and checking participants answers against a correct solution. -[jspsych‑external‑html](jspsych-external-html) | Displays an external HTML page (such as a consent form) and lets the subject respond by clicking a button or pressing a key. Plugin can validate their response, which is useful for making sure that a subject has granted consent before starting the experiment. -[jspsych‑free‑sort](jspsych-free-sort) | Displays a set of images on the screen in random locations. Subjects can click and drag the images to move them around the screen. Records all the moves made by the subject, so the sequence of moves can be recovered from the data. -[jspsych‑fullscreen](jspsych-fullscreen) | Toggles the experiment in and out of fullscreen mode. -[jspsych‑html‑button‑response](jspsych-html-button-response) | Display an HTML-formatted stimulus and allow the subject to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons. -[jspsych‑html‑keyboard‑response](jspsych-html-keyboard-response) | Display an HTML-formatted stimulus and allow the subject to respond by pressing a key. 
-[jspsych‑html‑slider‑response](jspsych-html-slider-response) | Display an HTML-formatted stimulus and allow the subject to respond by moving a slider to indicate a value. -[jspsych‑iat‑html](jspsych-iat-html) | The implicit association task, using HTML-formatted stimuli. -[jspsych‑iat‑image](jspsych-iat-image) | The implicit association task, using images as stimuli. -[jspsych‑image‑button‑response](jspsych-image-button-response) | Display an image and allow the subject to respond by choosing a button to click. The button can be customized extensively, e.g., using images in place of standard buttons. -[jspsych‑image‑keyboard‑response](jspsych-image-keyboard-response) | Display an image and allow the subject to respond by pressing a key. -[jspsych‑image‑slider‑response](jspsych-image-slider-response) | Display an image and allow the subject to respond by moving a slider to indicate a value. -[jspsych‑instructions](jspsych-instructions) | For displaying instructions to the subject. Allows the subject to navigate between pages of instructions using keys or buttons. -[jspsych‑rdk](jspsych-rdk) | This plugin displays a Random Dot Kinematogram (RDK) and allows the subject to report the primary direction of motion by pressing a key on the keyboard. -[jspsych‑reconstruction](jspsych-reconstruction) | The subject interacts with a stimulus by modifying a parameter of the stimulus and observing the change in the stimulus in real-time. -[jspsych‑resize](jspsych-resize) | Calibrate the display so that materials display with a known physical size. -[jspsych‑same‑different‑html](jspsych-same-different-html) | A same-different judgment task. An HTML-formatted stimulus is shown, followed by a brief gap, and then another stimulus is shown. The subject indicates whether the stimuli are the same or different. -[jspsych‑same‑different‑image](jspsych-same-different-image) | A same-different judgment task. An image is shown, followed by a brief gap, and then another stimulus is shown. 
The subject indicates whether the stimuli are the same or different. -[jspsych‑serial‑reaction‑time](jspsych-serial-reaction-time) | A set of boxes are displayed on the screen and one of them changes color. The subject presses a key that corresponds to the different color box as fast as possible. -[jspsych‑serial‑reaction‑time‑mouse](jspsych-serial-reaction-time-mouse) | A set of boxes are displayed on the screen and one of them changes color. The subjects clicks the box that changed color as fast as possible. -[jspsych‑survey‑html‑form](jspsych-survey-html-form) | Renders a custom HTML form. Allows for mixing multiple kinds of form input. -[jspsych‑survey‑likert](jspsych-survey-likert) | Displays likert-style questions. -[jspsych‑survey‑multi‑choice](jspsych-survey-multi-choice) | Displays multiple choice questions with one answer allowed per question. -[jspsych‑survey‑multi‑select](jspsych-survey-multi-select) | Displays multiple choice questions with multiple answes allowed per question. -[jspsych‑survey‑text](jspsych-survey-text) | Shows a prompt with a text box. The subject writes a response and then submits by clicking a button. -[jspsych‑video‑button‑response](jspsych-video-button-response) | Displays a video file with many options for customizing playback. Subject responds to the video by pressing a button. -[jspsych‑video‑keyboard‑response](jspsych-video-keyboard-response) | Displays a video file with many options for customizing playback. Subject responds to the video by pressing a key. -[jspsych‑video‑slider‑response](jspsych-video-slider-response) | Displays a video file with many options for customizing playback. Subject responds to the video by moving a slider. -[jspsych‑visual‑search‑circle](jspsych-visual-search-circle) | A customizable visual-search task modelled after [Wang, Cavanagh, & Green (1994)](http://dx.doi.org/10.3758/BF03206946). The subject indicates whether or not a target is present among a set of distractors. 
The stimuli are displayed in a circle, evenly-spaced, equidistant from a fixation point. -[jspsych‑vsl‑animate‑occlusion](jspsych-vsl-animate-occlusion) | A visual statistical learning paradigm based on [Fiser & Aslin (2002)](http://dx.doi.org/10.1037//0278-7393.28.3.458). A sequence of stimuli are shown in an oscillatory motion. An occluding rectangle is in the center of the display, and the stimuli change when they are behind the rectangle. -[jspsych‑vsl‑grid‑scene](jspsych-vsl-grid-scene) | A visual statistical learning paradigm based on [Fiser & Aslin (2001)](http://dx.doi.org/10.1111/1467-9280.00392). A scene made up of individual stimuli arranged in a grid is shown. This plugin can also generate the HTML code to render the stimuli for use in other plugins. diff --git a/docs/tutorials/hello-world.md b/docs/tutorials/hello-world.md index fc040be0ad..f5b1fdc08f 100644 --- a/docs/tutorials/hello-world.md +++ b/docs/tutorials/hello-world.md @@ -2,30 +2,51 @@ In the long tradition of **"Hello world!"** examples, this tutorial creates an experiment that outputs the phrase "Hello world!" to the browser. Though useless as an actual experiment, the process is helpful for learning the basics of using the jsPsych library. This tutorial will assume that you know very little about how to set up a web page. +!!! info + If you would like to use modern web development tools (e.g. ES6 modules, Node/NPM, webpack, Babel), you may want to check out the [jsPsych Builder](https://github.com/bjoluc/jspsych-builder) CLI utility. jsPsych Builder allows you to automate the experiment setup, spin up a development server, and transpile and bundle scripts and styles. Using jsPsych Builder will automate some of the steps in this tutorial, so if you prefer that option, you may want to switch to the getting started instructions on the jsPsych Builder GitHub page. + ## Step 1: Download the jsPsych library -Start by downloading the jsPsych library. 
The most recent version can always be found on the [GitHub releases page](https://github.com/jodeleeuw/jsPsych/releases). +Start by downloading the jsPsych library. The most recent version can always be found on the [GitHub releases page](https://github.com/jspsych/jsPsych/releases). *Note: the image below shows version 4.2, but the process is the same for the most recent version.* - + ![releasespage](/img/githubreleases.jpg) +!!! warning + We strongly recommend downloading the latest release of the code rather than downloading the zip file of the code via the *Big Green Button* on the GitHub site. Downloading the code via the *Big Green Button* may give you a copy of the library that is in development and contains bugs. + ## Step 2: Create a folder to store your experiment files -Create a folder on your computer to put the experiment files in. Once you've created the folder, open the downloaded archive from step 1, and move the extracted folder (called `jspsych-6.0.5` if using v6.0.5 of jsPsych) into the experiment folder. Here's what it looks like on a Windows machine: +Create a folder on your computer to put the experiment files in. Once you've created the folder, open the downloaded archive from step 1, and move the extracted folder (called `jspsych-6.3.0` if using v6.3.0 of jsPsych) into the experiment folder. -![folder setup](/img/folder-setup.png) +``` +📂 My Experiment +-- 📂 jspsych-6.3.0 +``` -## Step 3: Create a new HTML file +If you open up the `jspsych-6.3.0` folder you should see this structure. -To edit jsPsych code you'll need a programming-friendly text editor. Some free options are: +``` +📂 My Experiment +-- 📂 jspsych-6.3.0 +---- 📂 css +---- 📂 examples +---- 📂 plugins +---- 📄 jspsych.js +``` + +## Step 3: Create a new HTML file -* [Atom](https://atom.io) (Windows, OSX, Linux) -* [VSCode](https://code.visualstudio.com/) (Windows, OSX, Linux) +To edit jsPsych code you'll need a programming-friendly text editor. 
A great free option is [Visual Studio Code](https://code.visualstudio.com/) (Windows, OSX, Linux). Once you've got a text editor that you like, create a new file in the experiment folder called `experiment.html` -![folder setup](/img/folder-with-html.png) +``` +📂 My Experiment +-- 📂 jspsych-6.3.0 +-- 📄 experiment.html +``` ## Step 4: Add the bare-minimum HTML code @@ -52,7 +73,7 @@ To use jsPsych, add a ` + @@ -65,8 +86,8 @@ You may also want to import the jsPsych stylesheet, which applies a basic set of My experiment - - + + @@ -81,9 +102,9 @@ For the demo, we want to show some text on the screen. This is exactly what the My experiment - - - + + + @@ -98,9 +119,9 @@ To add JavaScript code directly to the webpage we need to add a set of ` - - + + + - - + + + - - + + + @@ -36,9 +37,9 @@ All jsPsych experiments are defined by a timeline. The timeline is an array that var timeline = []; ``` -Let's greet the subject with a simple welcome message using the [jspsych-html-keyboard-response](../plugins/jspsych-html-keyboard-response.md) plugin. +Let's greet the subject with a simple welcome message using the [jspsych-html-keyboard-response](/plugins/jspsych-html-keyboard-response.md) plugin. -First, we create a trial that uses the jspsych-html-keyboard-response plugin and contains a simple string to show the subject. +First, we create a trial that uses the `jspsych-html-keyboard-response` plugin and contains a simple string to show the subject. ```javascript var welcome = { @@ -60,63 +61,69 @@ jsPsych.init({ timeline: timeline }); ``` - -### The complete code so far - -```html - - - - My experiment - - - - - - - -``` +After each step in the tutorial you can view the complete code up to that point by clicking on the expandable box below. + +??? example "The complete code so far" + ``` html + + + + My experiment + + + + + + + + ``` ## Part 3: Show instructions -We can use the same basic structure from part 2 to create a new trial that shows instructions to the subject. 
The only difference in this trial is that we will use HTML formatting to control how the instructions display.
+We can use the same basic structure from part 2 to create a new trial that shows instructions to the subject. The only difference in this trial is that we will use HTML formatting to control how the instructions display, and we will add a two-second gap after the trial using the `post_trial_gap` parameter.

The trial definition looks like this:

```javascript
var instructions = {
  type: "html-keyboard-response",
- stimulus: "

<p>In this experiment, a circle will appear in the center " +
- "of the screen.</p><p>If the circle is <strong>blue</strong>, " +
- "press the letter F on the keyboard as fast as you can.</p>" +
- "<p>If the circle is <strong>orange</strong>, press the letter J " +
- "as fast as you can.</p>" +
- "<div style='width: 700px;'>"+
- "<div style='float: left;'><img src='img/blue.png'></img>" +
- "<p class='small'><strong>Press the F key</strong></p></div>" +
- "<div style='float: right;'><img src='img/orange.png'></img>" +
- "<p class='small'><strong>Press the J key</strong></p></div>" +
- "</div>"+
- "<p>Press any key to begin.</p>"
+ stimulus: `
+ <p>In this experiment, a circle will appear in the center
+ of the screen.</p><p>If the circle is <strong>blue</strong>,
+ press the letter F on the keyboard as fast as you can.</p>
+ <p>If the circle is <strong>orange</strong>, press the letter J
+ as fast as you can.</p>
+ <div style='width: 700px;'>
+ <div style='float: left;'><img src='img/blue.png'></img>
+ <p class='small'><strong>Press the F key</strong></p></div>
+ <div style='float: right;'><img src='img/orange.png'></img>
+ <p class='small'><strong>Press the J key</strong></p></div>
+ </div>
+ <p>Press any key to begin.</p>
+ `, + post_trial_gap: 2000 }; ``` +!!! tip + In JavaScript there are three different ways to define a `string`. You can use single quotes `'`, double quotes `"`, or backticks `` ` ``. Using backticks has two advantages over the other approaches, especially when you are creating long strings with HTML. You can extend the `string` across multiple lines and you can use [template strings](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals) to easily incorporate variables. + Notice that the HTML includes `` tags to display the images that the subject will be responding to. You'll need to download these image files. Right-click on each image below and select *Save image as...*. Put the images in a folder called `img` in the experiment folder you created in part 1. ![blue circle](../img/blue.png) @@ -128,68 +135,69 @@ Don't forget to add the trial to the timeline: timeline.push(instructions); ``` -### The complete code so far - -```html - - - - My experiment - - - - - - - -``` +??? example "The complete code so far" + ```html + + + + My experiment + + + + + + + + ``` ## Part 4: Displaying stimuli and getting responses Creating trials to show the stimuli is conceptually the same as creating a trial to show instructions, except that now we are displaying an image instead of text or html. This means we need to use a different plugin: jspsych-image-keyboard-response. We need to start by loading this plugin by adding a ` - - - + + + + ``` @@ -215,75 +223,188 @@ As usual, we need to add the trials to the timeline. timeline.push(blue_trial, orange_trial); ``` -### The complete code so far - -```html - - - - My experiment - - - - - - - + + + + + + + + ``` + +## Part 5: Preloading media + +Whenever we use media elements (images, audio, or video) in an experiment it is a good idea to preload them prior to needing them for a trial. 
By preloading media we ask the participant's browser to download the files ahead of time, so that when we need to display or play them there is no lag while they download.
+
+We are going to use the [jspsych-preload plugin](/plugins/jspsych-preload.md) to preload the two images. The [media preloading section](/overview/media-preloading.md) goes into a lot of detail about the various options for preloading and the different ways that you can use this plugin. Here we are simply going to give the plugin a list of the files that we want preloaded.
+
+First we need to add the preload plugin to our `<head>` section.
+
+```html hl_lines="6"
+<head>
+  <title>My experiment</title>
+  <script src="jspsych-6.3.0/jspsych.js"></script>
+  <script src="jspsych-6.3.0/plugins/jspsych-html-keyboard-response.js"></script>
+  <script src="jspsych-6.3.0/plugins/jspsych-image-keyboard-response.js"></script>
+  <script src="jspsych-6.3.0/plugins/jspsych-preload.js"></script>
+  <link href="jspsych-6.3.0/css/jspsych.css" rel="stylesheet" type="text/css">
+</head>
+```

- /* define instructions trial */
- var instructions = {
- type: "html-keyboard-response",
- stimulus: "

<p>In this experiment, a circle will appear in the center " +
- "of the screen.</p><p>If the circle is <strong>blue</strong>, " +
- "press the letter F on the keyboard as fast as you can.</p>" +
- "<p>If the circle is <strong>orange</strong>, press the letter J " +
- "as fast as you can.</p>" +
- "<div style='width: 700px;'>"+
- "<div style='float: left;'><img src='img/blue.png'></img>" +
- "<p class='small'><strong>Press the F key</strong></p></div>" +
- "<div style='float: right;'><img src='img/orange.png'></img>" +
- "<p class='small'><strong>Press the J key</strong></p></div>" +
- "</div>"+
- "<p>Press any key to begin.</p>
", - post_trial_gap: 2000 - }; - timeline.push(instructions); - /* test trials */ - var blue_trial = { - type: 'image-keyboard-response', - stimulus: 'img/blue.png', - choices: ['f', 'j'] - }; +We'll put this trial at the very start of the experiment, so add this code before the `welcome` trial. - var orange_trial = { - type: 'image-keyboard-response', - stimulus: 'img/orange.png', - choices: ['f', 'j'] - } +```js +var preload = { + type: 'preload', + images: ['img/blue.png', 'img/orange.png'] +} +``` - timeline.push(blue_trial, orange_trial); +As always, add the trial to the timeline. - /* start the experiment */ - jsPsych.init({ - timeline: timeline - }); - - +```js +timeline.push(preload); ``` - -## Part 5: Timeline variables +??? example "The complete code so far" + + ```html + + + + My experiment + + + + + + + + + + ``` + +## Part 6: Timeline variables In the full experiment, we will want more than two trials. One way we could do this is to create many more objects that define trials and push them all onto the timeline, but there is a more efficient way: using timeline variables. @@ -328,96 +449,106 @@ var test_procedure = { } ``` -We have to add the `test_procedure` to the main `timeline` array, but the fixation and test trial do not need to be added to `timeline` because they already exist on the `test_procedure` timeline. +We have to add the `test_procedure` to the main `timeline` array, but the `fixation` and `test` trial do not need to be added to `timeline` because they already exist on the `test_procedure` timeline. ```javascript timeline.push(test_procedure); ``` -What happens when the experiment reaches the test procedure? jsPsych will run the `test_procedure` timeline one time for each entry in the `test_stimuli` array (twice, in this case). The first time through, jsPsych will substitute the timeline variables from the first array entry (blue image), and the second time through the second array entry will be used (orange image). 
Notice that the fixation trial occurs before both the orange and the blue circles, because the entire timeline of the `test_procedure` is repeated for each entry in the `timeline_variables` array. - -### The complete code so far - -```html - - - - My experiment - - - - - - - - -``` - - -## Part 6: Parameters for timelines with timeline variables - -Right now our experiment is a measly two trials long. Even worse is that the order of the stimuli is the same every time! When we use timeline variables, we get access to some very easy-to-use methods to randomize the order and repeat the trials. To randomize the order, simply set `randomize_order: true` on the object with the `timeline_variables`: +What happens when the experiment reaches the test procedure? jsPsych will run the `test_procedure` timeline one time for each entry in the `test_stimuli` array (two times total, in this case). The first time through, jsPsych will substitute the timeline variables from the first array entry (blue image), and the second time through the second array entry will be used (orange image). Notice that the fixation trial occurs before both the orange and the blue circles, because the entire timeline of the `test_procedure` is repeated for each entry in the `timeline_variables` array. + +??? example "The complete code so far" + + ```html + + + + My experiment + + + + + + + + + + ``` + + +## Part 7: Parameters for timelines with timeline variables + +Right now our experiment is a measly two trials long. Even worse is that the order of the stimuli is the same every time! When we use timeline variables, we get access to some methods to randomize the order and repeat the trials. 
To randomize the order, simply set `randomize_order: true` on the object with the `timeline_variables`: ```javascript var test_procedure = { @@ -437,91 +568,101 @@ var test_procedure = { repetitions: 5 } ``` -### The complete code so far - -```html - - - - My experiment - - - - - - - - -``` - -## Part 7: Using functions to generate parameters +??? example "The complete code so far" + + ```html + + + + My experiment + + + + + + + + + + ``` + +## Part 8: Using functions to generate parameters One aspect of the experiment that could be improved is the duration of the fixation cross. As the experiment stands right now, the timing of the circles appearing is very predictable. We can change that by using a different value for the `trial_duration` parameter in the `fixation` trial for each trial. But how can we do that and keep the simple code structure we have now where we only have to define the fixation trial once? One option would be to add another timeline variable, like `"fixation_duration"` and use that to control the timing. But another option is to specify the `trial_duration` parameter as a function. If a parameter is a function, jsPsych will execute the function every time the trial runs. That means that if the function returns different results probabilistically, we can get a different parameter value every time the trial runs. -To do that here, we'll use one of the built-in randomization methods in [jsPsych's randomization module](../core_library/jspsych-randomization.md). `jsPsych.randomization.sampleWithoutReplacement()` takes an array of items to sample from and generates a new array of length *N* by sampling without replacement. +To do that here, we'll use one of the built-in randomization methods in [jsPsych's randomization module](/core_library/jspsych-randomization.md). `jsPsych.randomization.sampleWithoutReplacement()` takes an array of items to sample from and generates a new array of length *N* by sampling without replacement. 
```javascript var fixation = { @@ -536,93 +677,103 @@ var fixation = { In the code above, we replaced the `trial_duration: 1000` parameter in `fixation` with a function. Inside the function, we take a sample from the array `[250, 500, 750, 1000, 1250, 1500, 1750, 2000]` of size 1 (second parameter to `jsPsych.randomization.sampleWithoutReplacement`). The return value from calling `jsPsych.randomization.sampleWithoutReplacement` is an array of length 1, so we add the `[0]` selection at the end to get the value out of the array. -### The complete code so far - -```html - - - - My experiment - - - - - - - - -``` - -## Part 8: Displaying the data - -We have created a complete, if simple, experiment at this point, so let's take a look at the data being generated. jsPsych has a built-in [function called `jsPsych.data.displayData()`](../core_library/jspsych-data.md#jspsychdatadisplaydata) that is useful for debugging your experiment. It will remove all of the information on the screen and replace it with the raw data collected so far. This isn't terribly useful when you are actually running an experiment, but it's nice for checking the data during development. - -We need the `displayData` function to execute when the experiment ends. One way to do this is to use the [`on_finish` callback function](../overview/callbacks.md#on_finish-experiment). This function will automatically execute once all the trials in the experiment are finished. We can specify a function to call in the `init` method. +??? example "The complete code so far" + + ```html + + + + My experiment + + + + + + + + + + ``` + +## Part 10: Displaying the data + +We have created a complete, if simple, experiment at this point, so let's take a look at the data being generated. jsPsych has a built-in [function called `jsPsych.data.displayData()`](/core_library/jspsych-data.md#jspsychdatadisplaydata) that is useful for debugging your experiment. 
It will remove all of the information on the screen and replace it with the raw data collected so far. This isn't terribly useful when you are actually running an experiment, but it's nice for checking the data during development. + +We need the `displayData` function to execute when the experiment ends. One way to do this is to use the [`on_finish` callback function](/overview/callbacks.md#on_finish-experiment). This function will automatically execute once all the trials in the experiment are finished. We can specify a function to call in the `init` method. ```javascript jsPsych.init({ @@ -633,119 +784,144 @@ jsPsych.init({ }); ``` -### The complete code so far - -```html - - - - My experiment - - - - - - - + + + + + + + + + ``` + +## Part 11: Tagging trials with additional data - var test = { - type: "image-keyboard-response", - stimulus: jsPsych.timelineVariable('stimulus'), - choices: ['f', 'j'] - } - - var test_procedure = { - timeline: [fixation, test], - timeline_variables: test_stimuli, - randomize_order: true, - repetitions: 5 - } +All trials in jsPsych can be tagged with additional arbitrary data. This data will get stored alongside the data that the plugin normally generates, which allows experimenters to record properties of a trial along with the data from the trial. - timeline.push(test_procedure); +When might you use this feature? In this experiment, it would be nice to tag each trial with a circle as a `response` trial, so that the resulting data can be easily filtered to look at only the critical trials. We can do that like this. 
- /* start the experiment */ - jsPsych.init({ - timeline: timeline, - on_finish: function() { - jsPsych.data.displayData(); - } - }); - - +```javascript +var test = { + type: "image-keyboard-response", + stimulus: jsPsych.timelineVariable('stimulus'), + choices: ['f', 'j'], + data: { + task: 'response' + } +} ``` -## Part 9: Tagging trials with additional data - -All trials in jsPsych can be tagged with additional arbitrary data. This data will get stored alongside the data that the plugin normally generates, which allows experimenters to record properties of a trial along with the data from the trial. - -When might you use this feature? In this experiment, it would be nice to tag each trial with a circle as a test trial, so that the resulting data can be easily filtered to look at only the test trials. We also could tag the test trials with a property that indicates what the correct response should be (F for the blue circles, J for the orange). - -In our current code, we are using the timeline variables feature of jsPsych to choose which circle gets presented on a trial. Since we want to tag the trials differently based on which circle is presented, we need to add the tagging data to the `test_stimuli` array, and then use the `jsPsych.timelineVariable()` function to get the values and assign them to the `data` property of the trial. +We also could tag the test trials with a property that indicates what the correct response should be (F for the blue circles, J for the orange). In our current code, we are using the timeline variables feature of jsPsych to choose which circle gets presented on a trial. Since we want to tag the trials differently based on which circle is presented, we need to add the tagging data to the `test_stimuli` array, and then use the `jsPsych.timelineVariable()` function to get the value and assign it to a property in the `data` of the trial. 
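The idea behind this tagging can be sketched in plain JavaScript. The snippet below is illustrative only, not jsPsych's internals: `runTrial` and its `pluginData` values are hypothetical stand-ins showing how a per-trial `data` tag, with its timeline variable resolved to the current entry's value, ends up merged into the recorded trial data.

```javascript
// Hypothetical sketch: how a `data` tag with a resolved timeline variable
// could be merged into plugin-generated trial data. Not jsPsych internals.
var test_stimuli = [
  { stimulus: "img/blue.png", correct_response: 'f' },
  { stimulus: "img/orange.png", correct_response: 'j' }
];

function runTrial(tv) {
  // Data a plugin might record for one trial (made-up values):
  var pluginData = { rt: 512, response: 'f', stimulus: tv.stimulus };
  // The trial's `data` property, with the timeline variable resolved
  // to the current entry's correct_response:
  var tag = { task: 'response', correct_response: tv.correct_response };
  // The tag is merged alongside the plugin's data:
  return Object.assign({}, pluginData, tag);
}

var allData = test_stimuli.map(runTrial);
console.log(allData[1].correct_response); // 'j'
```

Each recorded trial then carries both the plugin's measurements and the tags, which is what makes later filtering by `task` or scoring against `correct_response` straightforward.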
We start by modifying `test_stimuli`: ```javascript var test_stimuli = [ - { stimulus: "img/blue.png", data: {test_part: 'test', correct_response: 'f'}}, - { stimulus: "img/orange.png", data: {test_part: 'test', correct_response: 'j'}} + { stimulus: "img/blue.png", correct_response: 'f'}, + { stimulus: "img/orange.png", correct_response: 'j'} ]; ``` -Now we assign these values to the `data` parameter of the `test` trial. + +Now we can use `timelineVariable()` in the `data` parameter of the `test` trial. ```javascript var test = { type: "image-keyboard-response", stimulus: jsPsych.timelineVariable('stimulus'), choices: ['f', 'j'], - data: jsPsych.timelineVariable('data') + data: { + task: 'response', + correct_response: jsPsych.timelineVariable('correct_response') + } } ``` -Another kind of tagging that would be useful is to mark each fixation trial as such, to make removing the data from fixation trials easier. This is a simpler task, as we don't need to use the timeline variables feature. We can just add a `data` property to the `fixation` trial: +Another kind of tagging that would be useful is to mark each fixation trial as such, to make removing the data from fixation trials easier. ```js var fixation = { @@ -755,102 +931,119 @@ var fixation = { trial_duration: function(){ return jsPsych.randomization.sampleWithoutReplacement([250, 500, 750, 1000, 1250, 1500, 1750, 2000], 1)[0]; }, - data: {test_part: 'fixation'} + data: { + task: 'fixation' + } } ``` -### The complete code so far - -```html - - - - My experiment - - - - - - - - -``` - -## Part 10: Manipulating data during the experiment - -Now that the data from the test trials has a tag that describes the correct response, it would be easy to analyze the data after the fact (in R, for example) and calculate whether the participant responded correctly. 
- -But, we can also do this in jsPsych as the experiment runs to save time later and enable a limited set of data aggregation and analysis directly in the experiment code. +??? example "The complete code so far" + + ```html + + + + My experiment + + + + + + + + + + ``` + +## Part 12: Manipulating data during the experiment + +Now that the data from the test trials has a tag that describes the correct response, it would be easy to analyze the data after the fact and calculate whether the participant responded correctly. + +But, we can also do this in jsPsych as the experiment runs to save time later and enable a limited set of data analysis directly in the experiment code. To do this, we'll use the `on_finish` event of the test trial. We can assign a function to `on_finish`, and that function will receive an object containing the data generated by the trial. This object can be manipulated inside the function, and any changes made to the object will be stored in jsPsych's internal representation of the data. @@ -861,125 +1054,145 @@ var test = { type: "image-keyboard-response", stimulus: jsPsych.timelineVariable('stimulus'), choices: ['f', 'j'], - data: jsPsych.timelineVariable('data'), + data: { + task: 'response', + correct_response: jsPsych.timelineVariable('correct_response') + }, on_finish: function(data){ - data.correct = data.key_press == jsPsych.pluginAPI.convertKeyCharacterToKeyCode(data.correct_response); + data.correct = jsPsych.pluginAPI.compareKeys(data.response, data.correct_response); } } ``` -The `data.key_press` value is a numeric key code indicating which key the subject pressed. The function `jsPsych.pluginAPI.convertKeyCharacterToKeyCode` converts the character representation of a key into the numeric representation (e.g., calling the function on the value `'f'` generates the value `70`). If this numeric value matches `data.key_press` then `data.correct` will be `true`. Otherwise, it will be `false`. 
- -### The complete code so far - -```html - - - - My experiment - - - - - - - - -``` - - -## Part 11: Data aggregation - -A new feature in jsPsych version 6.0 is a suite of data aggregation functions. You can now easily calculate things like mean response times for a selected set of trials. In this part, we'll use these functions to add a final trial to the experiment that tells the subject their accuracy and their mean response time for correct responses. - -We'll use the text plugin. Because the actual text that we want to display changes based on the subject's performance in the experiment, we need to use a function for the `text` parameter and return the desired text. +The `data.response` value is a string representation of the key the subject pressed. We can compare this with the `data.correct_response` value, and assign this computed value to a new property `data.correct`. + +??? example "The complete code so far" + + ```html + + + + My experiment + + + + + + + + + + ``` + + +## Part 13: Data aggregation + +jsPsych provides a limited set of analysis functions to allow you to calculate things like mean response times for a selected set of trials. In this part, we'll use these functions to add a final trial to the experiment that tells the subject their accuracy and their mean response time for correct responses. + +We'll use the `html-keyboard-response` plugin. Because the text that we want to display changes based on the subject's performance in the experiment, we need to use a function for the `stimulus` parameter and return the desired text. + +Here's what the code looks like, and a description follows below. 
```js var debrief_block = { type: "html-keyboard-response", stimulus: function() { - var trials = jsPsych.data.get().filter({test_part: 'test'}); + var trials = jsPsych.data.get().filter({task: 'response'}); var correct_trials = trials.filter({correct: true}); var accuracy = Math.round(correct_trials.count() / trials.count() * 100); var rt = Math.round(correct_trials.select('rt').mean()); - return "
<p>You responded correctly on "+accuracy+"% of the trials.</p>"+
-        "<p>Your average response time was "+rt+"ms.</p>"+
-        "<p>Press any key to complete the experiment. Thank you!</p>";
+      return `<p>You responded correctly on ${accuracy}% of the trials.</p>
+        <p>Your average response time was ${rt}ms.</p>
+        <p>Press any key to complete the experiment. Thank you!</p>
`; } }; @@ -987,7 +1200,7 @@ var debrief_block = { timeline.push(debrief_block); ``` -To create the variable `trials`, we use `jsPsych.data.get()` which returns a jsPsych data collection containing all of the data from the experiment. We can then use `.filter` to select only the trials where `test_part` is `'test'` (a benefit of tagging the trials in part 9). `trials` contains all of the data from the trials where a circle was shown. +To create the variable `trials`, we use `jsPsych.data.get()` which returns a jsPsych data collection containing all of the data from the experiment. We can then use `.filter` to select only the trials where `task` is `'response'` (a benefit of tagging the trials in part 11). `trials` contains all of the data from the trials where a circle was shown. To get only the correct trials, we can use `.filter()` again to select only the trials from the `trials` data collection where the property `correct` is `true`. @@ -997,17 +1210,18 @@ Finally, to calculate the mean response time on correct trials, we use the `.sel ## The final code -This code is available in the examples folder in the jsPsych download. It is called `demo-simple-rt-task.html`. +This code is available in the `/examples` folder in the jsPsych release download. It is called `demo-simple-rt-task.html`. ```html My experiment - - - - + + + + + - + + + + + + + + diff --git a/examples/conditional-and-loop-functions.html b/examples/conditional-and-loop-functions.html index e57b997b00..a554ded625 100644 --- a/examples/conditional-and-loop-functions.html +++ b/examples/conditional-and-loop-functions.html @@ -3,21 +3,22 @@ - + + + + + + + + + + + diff --git a/examples/data-add-properties.html b/examples/data-add-properties.html index 9793c790dc..a56e961267 100644 --- a/examples/data-add-properties.html +++ b/examples/data-add-properties.html @@ -4,20 +4,24 @@ - - + + - - + + - - + + - +
<p>The URL variable should be logged to the console</p>
diff --git a/examples/demo-flanker.html b/examples/demo-flanker.html index eccb385301..9dbe998624 100644 --- a/examples/demo-flanker.html +++ b/examples/demo-flanker.html @@ -5,6 +5,7 @@ + @@ -54,15 +55,15 @@ var test = { timeline: [{ type: 'image-keyboard-response', - choices: [37, 39], + choices: ['ArrowLeft', 'ArrowRight'], trial_duration: 1500, stimulus: jsPsych.timelineVariable('stimulus'), data: jsPsych.timelineVariable('data'), on_finish: function(data){ var correct = false; - if(data.direction == 'left' && data.key_press == 37 && data.rt > -1){ + if(data.direction == 'left' && jsPsych.pluginAPI.compareKeys(data.response, 'ArrowLeft') && data.rt > -1){ correct = true; - } else if(data.direction == 'right' && data.key_press == 39 && data.rt > -1){ + } else if(data.direction == 'right' && jsPsych.pluginAPI.compareKeys(data.response, 'ArrowRight') && data.rt > -1){ correct = true; } data.correct = correct; @@ -90,8 +91,16 @@ } }; + // manually preload images due to presenting them with timeline variables + var images = ["img/con1.png","img/con2.png","img/inc1.png","img/inc2.png"]; + var preload = { + type: 'preload', + images: images + } + /*set up experiment structure*/ var timeline = []; + timeline.push(preload); timeline.push(welcome); timeline.push(instructions); timeline.push(test); diff --git a/examples/demo-simple-rt-task.html b/examples/demo-simple-rt-task.html index 62a9d7ed7e..efd31c769d 100644 --- a/examples/demo-simple-rt-task.html +++ b/examples/demo-simple-rt-task.html @@ -1,104 +1,120 @@ - - My experiment - - - - - - - + + + + + + + + - + }; + timeline.push(debrief_block); + + /* start the experiment */ + jsPsych.init({ + timeline: timeline, + on_finish: function () { + jsPsych.data.displayData(); + } + }); + + + \ No newline at end of file diff --git a/examples/demos/demo_1.html b/examples/demos/demo_1.html index 4586247b69..7cc386ffe7 100644 --- a/examples/demos/demo_1.html +++ b/examples/demos/demo_1.html @@ -3,11 +3,17 @@ + \ No newline at 
end of file diff --git a/examples/demos/demo_2.html b/examples/demos/demo_2.html index 6a6fbddddd..561072eadf 100644 --- a/examples/demos/demo_2.html +++ b/examples/demos/demo_2.html @@ -3,41 +3,48 @@ - - + + - \ No newline at end of file + diff --git a/examples/demos/demo_3.html b/examples/demos/demo_3.html index c71d37d115..80065c715c 100644 --- a/examples/demos/demo_3.html +++ b/examples/demos/demo_3.html @@ -5,9 +5,12 @@ @@ -23,11 +26,12 @@ var test = { timeline: [{ type: 'html-keyboard-response', - choices: [37, 39], + choices: ["ArrowLeft", "ArrowRight"], stimulus: jsPsych.timelineVariable('stimulus'), data: jsPsych.timelineVariable('data'), post_trial_gap: 1500, - response_ends_trial: true + response_ends_trial: true, + css_classes: ['flanker-stimulus'] }], timeline_variables: test_stimuli, sample: {type: 'fixed-repetitions', size: 2} @@ -39,9 +43,10 @@ var congruent_rt = Math.round(jsPsych.data.get() .filter({stim_type: 'congruent'}).select('rt').mean()); var incongruent_rt = Math.round(jsPsych.data.get().filter({stim_type: 'incongruent'}).select('rt').mean()); - return "
<p>Your average response time for congruent trials was " + congruent_rt + "ms.</p>"+
-        "<p>Your average response time for incongruent trials was " + incongruent_rt + "ms.</p>";
-    }
+      return "<p>Your average response time for congruent trials was " + congruent_rt + "ms.</p>"+
+        "<p>Your average response time for incongruent trials was " + incongruent_rt + "ms.</p>
"; + }, + css_classes: ['debrief-text'] }; var timeline = []; diff --git a/examples/display-element-to-embed-experiment.html b/examples/display-element-to-embed-experiment.html index 3ad72553c7..17c5388c28 100644 --- a/examples/display-element-to-embed-experiment.html +++ b/examples/display-element-to-embed-experiment.html @@ -6,12 +6,10 @@ - + + @@ -31,36 +29,43 @@ - - + + - - + + diff --git a/examples/exclusions.html b/examples/exclusions.html index 28cd8865cb..3506df9a2e 100644 --- a/examples/exclusions.html +++ b/examples/exclusions.html @@ -4,12 +4,7 @@ - - + @@ -17,10 +12,9 @@ +// +// Math.seedrandom('yipee'); Sets Math.random to a function that is +// initialized using the given explicit seed. +// +// Math.seedrandom(); Sets Math.random to a function that is +// seeded using the current time, dom state, +// and other accumulated local entropy. +// The generated seed string is returned. +// +// Math.seedrandom('yowza', true); +// Seeds using the given explicit seed mixed +// together with accumulated entropy. +// +// +// Seeds using physical random bits downloaded +// from random.org. +// +// Seeds using urandom bits from call.jsonlib.com, +// which is faster than random.org. +// +// Examples: +// +// Math.seedrandom("hello"); // Use "hello" as the seed. +// document.write(Math.random()); // Always 0.5463663768140734 +// document.write(Math.random()); // Always 0.43973793770592234 +// var rng1 = Math.random; // Remember the current prng. +// +// var autoseed = Math.seedrandom(); // New prng with an automatic seed. +// document.write(Math.random()); // Pretty much unpredictable. +// +// Math.random = rng1; // Continue "hello" prng sequence. +// document.write(Math.random()); // Always 0.554769432473455 +// +// Math.seedrandom(autoseed); // Restart at the previous seed. +// document.write(Math.random()); // Repeat the 'unpredictable' value. 
+// +// Notes: +// +// Each time seedrandom('arg') is called, entropy from the passed seed +// is accumulated in a pool to help generate future seeds for the +// zero-argument form of Math.seedrandom, so entropy can be injected over +// time by calling seedrandom with explicit data repeatedly. +// +// On speed - This javascript implementation of Math.random() is about +// 3-10x slower than the built-in Math.random() because it is not native +// code, but this is typically fast enough anyway. Seeding is more expensive, +// especially if you use auto-seeding. Some details (timings on Chrome 4): +// +// Our Math.random() - avg less than 0.002 milliseconds per call +// seedrandom('explicit') - avg less than 0.5 milliseconds per call +// seedrandom('explicit', true) - avg less than 2 milliseconds per call +// seedrandom() - avg about 38 milliseconds per call +// +// LICENSE (BSD): +// +// Copyright 2010 David Bau, all rights reserved. +// +// Redistribution and use in source and binary forms, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright +// notice, this list of conditions and the following disclaimer. +// +// 2. Redistributions in binary form must reproduce the above copyright +// notice, this list of conditions and the following disclaimer in the +// documentation and/or other materials provided with the distribution. +// +// 3. Neither the name of this module nor the names of its contributors may +// be used to endorse or promote products derived from this software +// without specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +// A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +// +/** + * All code is in an anonymous closure to keep the global namespace clean. + * + * @param {number=} overflow + * @param {number=} startdenom + */ + +// Patched by Seb so that seedrandom.js does not pollute the Math object. +// My tests suggest that doing Math.trouble = 1 makes Math lookups about 5% +// slower. +numeric.seedrandom = { pow:Math.pow, random:Math.random }; + +(function (pool, math, width, chunks, significance, overflow, startdenom) { + + +// +// seedrandom() +// This is the seedrandom function described above. +// +math['seedrandom'] = function seedrandom(seed, use_entropy) { + var key = []; + var arc4; + + // Flatten the seed string or build one from local entropy if needed. + seed = mixkey(flatten( + use_entropy ? [seed, pool] : + arguments.length ? seed : + [new Date().getTime(), pool, window], 3), key); + + // Use the seed to initialize an ARC4 generator. + arc4 = new ARC4(key); + + // Mix the randomness into accumulated entropy. + mixkey(arc4.S, pool); + + // Override Math.random + + // This function returns a random double in [0, 1) that contains + // randomness in every bit of the mantissa of the IEEE 754 value. + + math['random'] = function random() { // Closure to return a random double: + var n = arc4.g(chunks); // Start with a numerator n < 2 ^ 48 + var d = startdenom; // and denominator d = 2 ^ 48. + var x = 0; // and no 'extra last byte'. 
+ while (n < significance) { // Fill up all significant digits by + n = (n + x) * width; // shifting numerator and + d *= width; // denominator and generating a + x = arc4.g(1); // new least-significant-byte. + } + while (n >= overflow) { // To avoid rounding up, before adding + n /= 2; // last byte, shift everything + d /= 2; // right using integer math until + x >>>= 1; // we have exactly the desired bits. + } + return (n + x) / d; // Form the number within [0, 1). + }; + + // Return the seed that was used + return seed; +}; + +// +// ARC4 +// +// An ARC4 implementation. The constructor takes a key in the form of +// an array of at most (width) integers that should be 0 <= x < (width). +// +// The g(count) method returns a pseudorandom integer that concatenates +// the next (count) outputs from ARC4. Its return value is a number x +// that is in the range 0 <= x < (width ^ count). +// +/** @constructor */ +function ARC4(key) { + var t, u, me = this, keylen = key.length; + var i = 0, j = me.i = me.j = me.m = 0; + me.S = []; + me.c = []; + + // The empty key [] is treated as [0]. + if (!keylen) { key = [keylen++]; } + + // Set up S using the standard key scheduling algorithm. + while (i < width) { me.S[i] = i++; } + for (i = 0; i < width; i++) { + t = me.S[i]; + j = lowbits(j + t + key[i % keylen]); + u = me.S[j]; + me.S[i] = u; + me.S[j] = t; + } + + // The "g" method returns the next (count) outputs as one number. + me.g = function getnext(count) { + var s = me.S; + var i = lowbits(me.i + 1); var t = s[i]; + var j = lowbits(me.j + t); var u = s[j]; + s[i] = u; + s[j] = t; + var r = s[lowbits(t + u)]; + while (--count) { + i = lowbits(i + 1); t = s[i]; + j = lowbits(j + t); u = s[j]; + s[i] = u; + s[j] = t; + r = r * width + s[lowbits(t + u)]; + } + me.i = i; + me.j = j; + return r; + }; + // For robust unpredictability discard an initial batch of values. 
+ // See http://www.rsa.com/rsalabs/node.asp?id=2009 + me.g(width); +} + +// +// flatten() +// Converts an object tree to nested arrays of strings. +// +/** @param {Object=} result + * @param {string=} prop + * @param {string=} typ */ +function flatten(obj, depth, result, prop, typ) { + result = []; + typ = typeof(obj); + if (depth && typ == 'object') { + for (prop in obj) { + if (prop.indexOf('S') < 5) { // Avoid FF3 bug (local/sessionStorage) + try { result.push(flatten(obj[prop], depth - 1)); } catch (e) {} + } + } + } + return (result.length ? result : obj + (typ != 'string' ? '\0' : '')); +} + +// +// mixkey() +// Mixes a string seed into a key that is an array of integers, and +// returns a shortened string seed that is equivalent to the result key. +// +/** @param {number=} smear + * @param {number=} j */ +function mixkey(seed, key, smear, j) { + seed += ''; // Ensure the seed is a string + smear = 0; + for (j = 0; j < seed.length; j++) { + key[lowbits(j)] = + lowbits((smear ^= key[lowbits(j)] * 19) + seed.charCodeAt(j)); + } + seed = ''; + for (j in key) { seed += String.fromCharCode(key[j]); } + return seed; +} + +// +// lowbits() +// A quick "n mod width" for width a power of 2. +// +function lowbits(n) { return n & (width - 1); } + +// +// The following constants are related to IEEE 754 limits. +// +startdenom = math.pow(width, chunks); +significance = math.pow(2, significance); +overflow = significance * 2; + +// +// When seedrandom.js is loaded, we immediately mix a few bits +// from the built-in RNG into the entropy pool. Because we do +// not want to intefere with determinstic PRNG state later, +// seedrandom will not call math.random on its own again after +// initialization. +// +mixkey(math.random(), pool); + +// End anonymous scope, and pass initial values. 
+}( + [], // pool: entropy pool starts empty + numeric.seedrandom, // math: package containing random, pow, and seedrandom + 256, // width: each RC4 output is 0 <= x < 256 + 6, // chunks: at least six RC4 outputs for each double + 52 // significance: there are 52 significant digits in a double + )); +/* This file is a slightly modified version of quadprog.js from Alberto Santini. + * It has been slightly modified by Sébastien Loisel to make sure that it handles + * 0-based Arrays instead of 1-based Arrays. + * License is in resources/LICENSE.quadprog */ +(function(exports) { + +function base0to1(A) { + if(typeof A !== "object") { return A; } + var ret = [], i,n=A.length; + for(i=0;i meq) { + work[l] = sum; + } else { + work[l] = -Math.abs(sum); + if (sum > 0) { + for (j = 1; j <= n; j = j + 1) { + amat[j][i] = -amat[j][i]; + } + bvec[i] = -bvec[i]; + } + } + } + + for (i = 1; i <= nact; i = i + 1) { + work[iwsv + iact[i]] = 0; + } + + nvl = 0; + temp = 0; + for (i = 1; i <= q; i = i + 1) { + if (work[iwsv + i] < temp * work[iwnbv + i]) { + nvl = i; + temp = work[iwsv + i] / work[iwnbv + i]; + } + } + if (nvl === 0) { + return 999; + } + + return 0; + } + + function fn_goto_55() { + for (i = 1; i <= n; i = i + 1) { + sum = 0; + for (j = 1; j <= n; j = j + 1) { + sum = sum + dmat[j][i] * amat[j][nvl]; + } + work[i] = sum; + } + + l1 = iwzv; + for (i = 1; i <= n; i = i + 1) { + work[l1 + i] = 0; + } + for (j = nact + 1; j <= n; j = j + 1) { + for (i = 1; i <= n; i = i + 1) { + work[l1 + i] = work[l1 + i] + dmat[i][j] * work[j]; + } + } + + t1inf = true; + for (i = nact; i >= 1; i = i - 1) { + sum = work[i]; + l = iwrm + (i * (i + 3)) / 2; + l1 = l - i; + for (j = i + 1; j <= nact; j = j + 1) { + sum = sum - work[l] * work[iwrv + j]; + l = l + j; + } + sum = sum / work[l1]; + work[iwrv + i] = sum; + if (iact[i] < meq) { + // continue; + break; + } + if (sum < 0) { + // continue; + break; + } + t1inf = false; + it1 = i; + } + + if (!t1inf) { + t1 = work[iwuv + it1] / 
work[iwrv + it1]; + for (i = 1; i <= nact; i = i + 1) { + if (iact[i] < meq) { + // continue; + break; + } + if (work[iwrv + i] < 0) { + // continue; + break; + } + temp = work[iwuv + i] / work[iwrv + i]; + if (temp < t1) { + t1 = temp; + it1 = i; + } + } + } + + sum = 0; + for (i = iwzv + 1; i <= iwzv + n; i = i + 1) { + sum = sum + work[i] * work[i]; + } + if (Math.abs(sum) <= vsmall) { + if (t1inf) { + ierr[1] = 1; + // GOTO 999 + return 999; + } else { + for (i = 1; i <= nact; i = i + 1) { + work[iwuv + i] = work[iwuv + i] - t1 * work[iwrv + i]; + } + work[iwuv + nact + 1] = work[iwuv + nact + 1] + t1; + // GOTO 700 + return 700; + } + } else { + sum = 0; + for (i = 1; i <= n; i = i + 1) { + sum = sum + work[iwzv + i] * amat[i][nvl]; + } + tt = -work[iwsv + nvl] / sum; + t2min = true; + if (!t1inf) { + if (t1 < tt) { + tt = t1; + t2min = false; + } + } + + for (i = 1; i <= n; i = i + 1) { + sol[i] = sol[i] + tt * work[iwzv + i]; + if (Math.abs(sol[i]) < vsmall) { + sol[i] = 0; + } + } + + crval[1] = crval[1] + tt * sum * (tt / 2 + work[iwuv + nact + 1]); + for (i = 1; i <= nact; i = i + 1) { + work[iwuv + i] = work[iwuv + i] - tt * work[iwrv + i]; + } + work[iwuv + nact + 1] = work[iwuv + nact + 1] + tt; + + if (t2min) { + nact = nact + 1; + iact[nact] = nvl; + + l = iwrm + ((nact - 1) * nact) / 2 + 1; + for (i = 1; i <= nact - 1; i = i + 1) { + work[l] = work[i]; + l = l + 1; + } + + if (nact === n) { + work[l] = work[n]; + } else { + for (i = n; i >= nact + 1; i = i - 1) { + if (work[i] === 0) { + // continue; + break; + } + gc = Math.max(Math.abs(work[i - 1]), Math.abs(work[i])); + gs = Math.min(Math.abs(work[i - 1]), Math.abs(work[i])); + if (work[i - 1] >= 0) { + temp = Math.abs(gc * Math.sqrt(1 + gs * gs / (gc * gc))); + } else { + temp = -Math.abs(gc * Math.sqrt(1 + gs * gs / (gc * gc))); + } + gc = work[i - 1] / temp; + gs = work[i] / temp; + + if (gc === 1) { + // continue; + break; + } + if (gc === 0) { + work[i - 1] = gs * temp; + for (j = 1; j <= n; 
j = j + 1) { + temp = dmat[j][i - 1]; + dmat[j][i - 1] = dmat[j][i]; + dmat[j][i] = temp; + } + } else { + work[i - 1] = temp; + nu = gs / (1 + gc); + for (j = 1; j <= n; j = j + 1) { + temp = gc * dmat[j][i - 1] + gs * dmat[j][i]; + dmat[j][i] = nu * (dmat[j][i - 1] + temp) - dmat[j][i]; + dmat[j][i - 1] = temp; + + } + } + } + work[l] = work[nact]; + } + } else { + sum = -bvec[nvl]; + for (j = 1; j <= n; j = j + 1) { + sum = sum + sol[j] * amat[j][nvl]; + } + if (nvl > meq) { + work[iwsv + nvl] = sum; + } else { + work[iwsv + nvl] = -Math.abs(sum); + if (sum > 0) { + for (j = 1; j <= n; j = j + 1) { + amat[j][nvl] = -amat[j][nvl]; + } + bvec[nvl] = -bvec[nvl]; + } + } + // GOTO 700 + return 700; + } + } + + return 0; + } + + function fn_goto_797() { + l = iwrm + (it1 * (it1 + 1)) / 2 + 1; + l1 = l + it1; + if (work[l1] === 0) { + // GOTO 798 + return 798; + } + gc = Math.max(Math.abs(work[l1 - 1]), Math.abs(work[l1])); + gs = Math.min(Math.abs(work[l1 - 1]), Math.abs(work[l1])); + if (work[l1 - 1] >= 0) { + temp = Math.abs(gc * Math.sqrt(1 + gs * gs / (gc * gc))); + } else { + temp = -Math.abs(gc * Math.sqrt(1 + gs * gs / (gc * gc))); + } + gc = work[l1 - 1] / temp; + gs = work[l1] / temp; + + if (gc === 1) { + // GOTO 798 + return 798; + } + if (gc === 0) { + for (i = it1 + 1; i <= nact; i = i + 1) { + temp = work[l1 - 1]; + work[l1 - 1] = work[l1]; + work[l1] = temp; + l1 = l1 + i; + } + for (i = 1; i <= n; i = i + 1) { + temp = dmat[i][it1]; + dmat[i][it1] = dmat[i][it1 + 1]; + dmat[i][it1 + 1] = temp; + } + } else { + nu = gs / (1 + gc); + for (i = it1 + 1; i <= nact; i = i + 1) { + temp = gc * work[l1 - 1] + gs * work[l1]; + work[l1] = nu * (work[l1 - 1] + temp) - work[l1]; + work[l1 - 1] = temp; + l1 = l1 + i; + } + for (i = 1; i <= n; i = i + 1) { + temp = gc * dmat[i][it1] + gs * dmat[i][it1 + 1]; + dmat[i][it1 + 1] = nu * (dmat[i][it1] + temp) - dmat[i][it1 + 1]; + dmat[i][it1] = temp; + } + } + + return 0; + } + + function fn_goto_798() { + l1 = l - 
it1; + for (i = 1; i <= it1; i = i + 1) { + work[l1] = work[l]; + l = l + 1; + l1 = l1 + 1; + } + + work[iwuv + it1] = work[iwuv + it1 + 1]; + iact[it1] = iact[it1 + 1]; + it1 = it1 + 1; + if (it1 < nact) { + // GOTO 797 + return 797; + } + + return 0; + } + + function fn_goto_799() { + work[iwuv + nact] = work[iwuv + nact + 1]; + work[iwuv + nact + 1] = 0; + iact[nact] = 0; + nact = nact - 1; + iter[2] = iter[2] + 1; + + return 0; + } + + go = 0; + while (true) { + go = fn_goto_50(); + if (go === 999) { + return; + } + while (true) { + go = fn_goto_55(); + if (go === 0) { + break; + } + if (go === 999) { + return; + } + if (go === 700) { + if (it1 === nact) { + fn_goto_799(); + } else { + while (true) { + fn_goto_797(); + go = fn_goto_798(); + if (go !== 797) { + break; + } + } + fn_goto_799(); + } + } + } + } + +} + +function solveQP(Dmat, dvec, Amat, bvec, meq, factorized) { + Dmat = base0to1(Dmat); + dvec = base0to1(dvec); + Amat = base0to1(Amat); + var i, n, q, + nact, r, + crval = [], iact = [], sol = [], work = [], iter = [], + message; + + meq = meq || 0; + factorized = factorized ? base0to1(factorized) : [undefined, 0]; + bvec = bvec ? 
base0to1(bvec) : []; + + // In Fortran the array index starts from 1 + n = Dmat.length - 1; + q = Amat[1].length - 1; + + if (!bvec) { + for (i = 1; i <= q; i = i + 1) { + bvec[i] = 0; + } + } + for (i = 1; i <= q; i = i + 1) { + iact[i] = 0; + } + nact = 0; + r = Math.min(n, q); + for (i = 1; i <= n; i = i + 1) { + sol[i] = 0; + } + crval[1] = 0; + for (i = 1; i <= (2 * n + (r * (r + 5)) / 2 + 2 * q + 1); i = i + 1) { + work[i] = 0; + } + for (i = 1; i <= 2; i = i + 1) { + iter[i] = 0; + } + + qpgen2(Dmat, dvec, n, n, sol, crval, Amat, + bvec, n, q, meq, iact, nact, iter, work, factorized); + + message = ""; + if (factorized[1] === 1) { + message = "constraints are inconsistent, no solution!"; + } + if (factorized[1] === 2) { + message = "matrix D in quadratic function is not positive definite!"; + } + + return { + solution: base1to0(sol), + value: base1to0(crval), + unconstrained_solution: base1to0(dvec), + iterations: base1to0(iter), + iact: base1to0(iact), + message: message + }; +} +exports.solveQP = solveQP; +}(numeric)); +/* +Shanti Rao sent me this routine by private email. I had to modify it +slightly to work on Arrays instead of using a Matrix object. +It is apparently translated from http://stitchpanorama.sourceforge.net/Python/svd.py +*/ + +numeric.svd= function svd(A) { + var temp; +//Compute the thin SVD from G. H. Golub and C. Reinsch, Numer. Math. 
14, 403-420 (1970) + var prec= numeric.epsilon; //Math.pow(2,-52) // assumes double prec + var tolerance= 1.e-64/prec; + var itmax= 50; + var c=0; + var i=0; + var j=0; + var k=0; + var l=0; + + var u= numeric.clone(A); + var m= u.length; + + var n= u[0].length; + + if (m < n) throw "Need more rows than columns" + + var e = new Array(n); + var q = new Array(n); + for (i=0; i < n; i++) e[i] = q[i] = 0.0; + var v = numeric.rep([n,n],0); + + function pythag(a,b) + { + a = Math.abs(a) + b = Math.abs(b) + if (a > b) + return a*Math.sqrt(1.0+(b*b/a/a)) + else if (b == 0.0) + return a + return b*Math.sqrt(1.0+(a*a/b/b)) + } + + //Householder's reduction to bidiagonal form + + var f= 0.0; + var g= 0.0; + var h= 0.0; + var x= 0.0; + var y= 0.0; + var z= 0.0; + var s= 0.0; + + for (i=0; i < n; i++) + { + e[i]= g; + s= 0.0; + l= i+1; + for (j=i; j < m; j++) + s += (u[j][i]*u[j][i]); + if (s <= tolerance) + g= 0.0; + else + { + f= u[i][i]; + g= Math.sqrt(s); + if (f >= 0.0) g= -g; + h= f*g-s + u[i][i]=f-g; + for (j=l; j < n; j++) + { + s= 0.0 + for (k=i; k < m; k++) + s += u[k][i]*u[k][j] + f= s/h + for (k=i; k < m; k++) + u[k][j]+=f*u[k][i] + } + } + q[i]= g + s= 0.0 + for (j=l; j < n; j++) + s= s + u[i][j]*u[i][j] + if (s <= tolerance) + g= 0.0 + else + { + f= u[i][i+1] + g= Math.sqrt(s) + if (f >= 0.0) g= -g + h= f*g - s + u[i][i+1] = f-g; + for (j=l; j < n; j++) e[j]= u[i][j]/h + for (j=l; j < m; j++) + { + s=0.0 + for (k=l; k < n; k++) + s += (u[j][k]*u[i][k]) + for (k=l; k < n; k++) + u[j][k]+=s*e[k] + } + } + y= Math.abs(q[i])+Math.abs(e[i]) + if (y>x) + x=y + } + + // accumulation of right hand gtransformations + for (i=n-1; i != -1; i+= -1) + { + if (g != 0.0) + { + h= g*u[i][i+1] + for (j=l; j < n; j++) + v[j][i]=u[i][j]/h + for (j=l; j < n; j++) + { + s=0.0 + for (k=l; k < n; k++) + s += u[i][k]*v[k][j] + for (k=l; k < n; k++) + v[k][j]+=(s*v[k][i]) + } + } + for (j=l; j < n; j++) + { + v[i][j] = 0; + v[j][i] = 0; + } + v[i][i] = 1; + g= e[i] + l= i + } + + // accumulation of left hand transformations + for (i=n-1; i != -1; i+= -1) + { + l= i+1 + g= q[i] + for (j=l; j < n; j++) + u[i][j] = 0; + if
(g != 0.0) + { + h= u[i][i]*g + for (j=l; j < n; j++) + { + s=0.0 + for (k=l; k < m; k++) s += u[k][i]*u[k][j]; + f= s/h + for (k=i; k < m; k++) u[k][j]+=f*u[k][i]; + } + for (j=i; j < m; j++) u[j][i] = u[j][i]/g; + } + else + for (j=i; j < m; j++) u[j][i] = 0; + u[i][i] += 1; + } + + // diagonalization of the bidiagonal form + prec= prec*x + for (k=n-1; k != -1; k+= -1) + { + for (var iteration=0; iteration < itmax; iteration++) + { // test f splitting + var test_convergence = false + for (l=k; l != -1; l+= -1) + { + if (Math.abs(e[l]) <= prec) + { test_convergence= true + break + } + if (Math.abs(q[l-1]) <= prec) + break + } + if (!test_convergence) + { // cancellation of e[l] if l>0 + c= 0.0 + s= 1.0 + var l1= l-1 + for (i =l; i < k+1; i++) + { + f= s*e[i] + e[i]= c*e[i] + if (Math.abs(f) <= prec) + break + g= q[i] + h= pythag(f,g) + q[i]= h + c= g/h + s= -f/h + for (j=0; j < m; j++) + { + y= u[j][l1] + z= u[j][i] + u[j][l1] = y*c+(z*s) + u[j][i] = -y*s+(z*c) + } + } + } + // test f convergence + z= q[k] + if (l == k) + { //convergence + if (z < 0.0) + { //q[k] is made non-negative + q[k]= -z + for (j=0; j < n; j++) + v[j][k] = -v[j][k] + } + break //break out of iteration loop and move on to next k value + } + if (iteration >= itmax-1) + throw 'Error: no convergence.' + // shift from bottom 2x2 minor + x= q[l] + y= q[k-1] + g= e[k-1] + h= e[k] + f= ((y-z)*(y+z)+(g-h)*(g+h))/(2.0*h*y) + g= pythag(f,1.0) + if (f < 0.0) + f= ((x-z)*(x+z)+h*(y/(f-g)-h))/x + else + f= ((x-z)*(x+z)+h*(y/(f+g)-h))/x + // next QR transformation + c= 1.0 + s= 1.0 + for (i=l+1; i< k+1; i++) + { + g= e[i] + y= q[i] + h= s*g + g= c*g + z= pythag(f,h) + e[i-1]= z + c= f/z + s= h/z + f= x*c+g*s + g= -x*s+g*c + h= y*s + y= y*c + for (j=0; j < n; j++) + { + x= v[j][i-1] + z= v[j][i] + v[j][i-1] = x*c+z*s + v[j][i] = -x*s+z*c + } + z= pythag(f,h) + q[i-1]= z + c= f/z + s= h/z + f= c*g+s*y + x= -s*g+c*y + for (j=0; j < m; j++) + { + y= u[j][i-1] + z= u[j][i] + u[j][i-1] = y*c+z*s + u[j][i] = -y*s+z*c + } + } + e[l]= 0.0 + e[k]= f + q[k]= x + } + } + + //vt= transpose(v) + //return (u,q,vt) + for (i=0; i < q.length; i++) + if (q[i] < prec) q[i] = 0 + + //sort eigenvalues + for (i=0; i < n; i++) + { + //writeln(q) + for (j=i-1; j >= 0; j--) + { + if (q[j] < q[i]) + { + // writeln(i,'-',j) + c = q[j] + q[j] = q[i] + q[i] = c + for(k=0;k < u.length;k++) { temp = u[k][i]; u[k][i] = u[k][j]; u[k][j] = temp; } + for(k=0;k < v.length;k++) { temp = v[k][i]; v[k][i] = v[k][j]; v[k][j] = temp; } + i = j + } + } + } + + return {U:u,S:q,V:v} +} +async function encodeWeights(tensors, group) { + const specs = []; + const dataPromises = []; + const names = Array.isArray(tensors) ? + tensors.map(tensor => tensor.name) : + Object.keys(tensors); + for (let i = 0; i < names.length; ++i) { + const name = names[i]; + const t = Array.isArray(tensors) ?
tensors[i].tensor : tensors[name]; + if (t.dtype !== 'float32' && t.dtype !== 'int32' && t.dtype !== 'bool' && + t.dtype !== 'string' && t.dtype !== 'complex64') { + throw new Error(`Unsupported dtype in weight '${name}': ${t.dtype}`); + } + const spec = { name, shape: t.shape, dtype: t.dtype }; + if (t.dtype === 'string') { + const utf8bytes = new Promise(async (resolve) => { + const vals = await t.bytes(); + const totalNumBytes = vals.reduce((p, c) => p + c.length, 0) + + NUM_BYTES_STRING_LENGTH * vals.length; + const bytes = new Uint8Array(totalNumBytes); + let offset = 0; + for (let i = 0; i < vals.length; i++) { + const val = vals[i]; + const bytesOfLength = new Uint8Array(new Uint32Array([val.length]).buffer); + bytes.set(bytesOfLength, offset); + offset += NUM_BYTES_STRING_LENGTH; + bytes.set(val, offset); + offset += val.length; + } + resolve(bytes); + }); + dataPromises.push(utf8bytes); + } + else { + dataPromises.push(t.data()); + } + if (group != null) { + spec.group = group; + } + specs.push(spec); + } + const tensorValues = await Promise.all(dataPromises); + return { data: concatenateTypedArrays(tensorValues), specs }; +} +/** + * Decode flat ArrayBuffer as weights. + * + * This function does not handle sharding. + * + * This function is the reverse of `encodeWeights`. + * + * @param buffer A flat ArrayBuffer carrying the binary values of the tensors + * concatenated in the order specified in `specs`. + * @param specs Specifications of the names, dtypes and shapes of the tensors + * whose value are encoded by `buffer`. + * @return A map from tensor name to tensor value, with the names corresponding + * to names in `specs`. + * @throws Error, if any of the tensors has unsupported dtype. + */ +function decodeWeights(buffer, specs) { + // TODO(adarob, cais): Support quantization. 
+ const out = {}; + let float16Decode; + let offset = 0; + for (const spec of specs) { + const name = spec.name; + const dtype = spec.dtype; + const shape = spec.shape; + const size = Object(_util__WEBPACK_IMPORTED_MODULE_2__["sizeFromShape"])(shape); + let values; + if ('quantization' in spec) { + const quantization = spec.quantization; + if (quantization.dtype === 'uint8' || quantization.dtype === 'uint16') { + if (!('min' in quantization && 'scale' in quantization)) { + throw new Error(`Weight ${spec.name} with quantization ${quantization.dtype} ` + + `doesn't have corresponding metadata min and scale.`); + } + } + else if (quantization.dtype === 'float16') { + if (dtype !== 'float32') { + throw new Error(`Weight ${spec.name} is quantized with ${quantization.dtype} ` + + `which only supports weights of type float32 not ${dtype}.`); + } + } + else { + throw new Error(`Weight ${spec.name} has unknown ` + + `quantization dtype ${quantization.dtype}. ` + + `Supported quantization dtypes are: ` + + `'uint8', 'uint16', and 'float16'.`); + } + const quantizationSizeFactor = _types__WEBPACK_IMPORTED_MODULE_3__[/* DTYPE_VALUE_SIZE_MAP */ "a"][quantization.dtype]; + const byteBuffer = buffer.slice(offset, offset + size * quantizationSizeFactor); + const quantizedArray = (quantization.dtype === 'uint8') ? 
+ new Uint8Array(byteBuffer) : + new Uint16Array(byteBuffer); + if (dtype === 'float32') { + if (quantization.dtype === 'uint8' || quantization.dtype === 'uint16') { + values = new Float32Array(quantizedArray.length); + for (let i = 0; i < quantizedArray.length; i++) { + const v = quantizedArray[i]; + values[i] = v * quantization.scale + quantization.min; + } + } + else if (quantization.dtype === 'float16') { + if (float16Decode === undefined) { + float16Decode = getFloat16Decoder(); + } + values = float16Decode(quantizedArray); + } + else { + throw new Error(`Unsupported quantization type ${quantization.dtype} ` + + `for weight type float32.`); + } + } + else if (dtype === 'int32') { + if (quantization.dtype !== 'uint8' && quantization.dtype !== 'uint16') { + throw new Error(`Unsupported quantization type ${quantization.dtype} ` + + `for weight type int32.`); + } + values = new Int32Array(quantizedArray.length); + for (let i = 0; i < quantizedArray.length; i++) { + const v = quantizedArray[i]; + values[i] = Math.round(v * quantization.scale + quantization.min); + } + } + else { + throw new Error(`Unsupported dtype in weight '${name}': ${dtype}`); + } + offset += size * quantizationSizeFactor; + } + else if (dtype === 'string') { + const size = Object(_util__WEBPACK_IMPORTED_MODULE_2__["sizeFromShape"])(spec.shape); + values = []; + for (let i = 0; i < size; i++) { + const byteLength = new Uint32Array(buffer.slice(offset, offset + NUM_BYTES_STRING_LENGTH))[0]; + offset += NUM_BYTES_STRING_LENGTH; + const bytes = new Uint8Array(buffer.slice(offset, offset + byteLength)); + values.push(bytes); + offset += byteLength; + } + } + else { + const dtypeFactor = _types__WEBPACK_IMPORTED_MODULE_3__[/* DTYPE_VALUE_SIZE_MAP */ "a"][dtype]; + const byteBuffer = buffer.slice(offset, offset + size * dtypeFactor); + if (dtype === 'float32') { + values = new Float32Array(byteBuffer); + } + else if (dtype === 'int32') { + values = new Int32Array(byteBuffer); + } + else if (dtype === 
'bool') { + values = new Uint8Array(byteBuffer); + } + else if (dtype === 'complex64') { + values = new Float32Array(byteBuffer); + const real = new Float32Array(values.length / 2); + const image = new Float32Array(values.length / 2); + for (let i = 0; i < real.length; i++) { + real[i] = values[i * 2]; + image[i] = values[i * 2 + 1]; + } + const realTensor = Object(_ops_tensor_ops__WEBPACK_IMPORTED_MODULE_1__[/* tensor */ "f"])(real, shape, 'float32'); + const imageTensor = Object(_ops_tensor_ops__WEBPACK_IMPORTED_MODULE_1__[/* tensor */ "f"])(image, shape, 'float32'); + out[name] = Object(_ops_complex__WEBPACK_IMPORTED_MODULE_0__[/* complex */ "a"])(realTensor, imageTensor); + } + else { + throw new Error(`Unsupported dtype in weight '${name}': ${dtype}`); + } + offset += size * dtypeFactor; + } + if (dtype !== 'complex64') { + out[name] = Object(_ops_tensor_ops__WEBPACK_IMPORTED_MODULE_1__[/* tensor */ "f"])(values, shape, dtype); + } + } + return out; +} +/** + * Concatenate TypedArrays into an ArrayBuffer. + */ +function concatenateTypedArrays(xs) { + // TODO(adarob, cais): Support quantization. + if (xs === null) { + throw new Error(`Invalid input value: ${JSON.stringify(xs)}`); + } + let totalByteLength = 0; + // `normalizedXs` is here for this reason: a `TypedArray`'s `buffer' + // can have a different byte length from that of the `TypedArray` itself, + // for example, when the `TypedArray` is created from an offset in an + // `ArrayBuffer`. `normliazedXs` holds `TypedArray`s whose `buffer`s match + // the `TypedArray` in byte length. If an element of `xs` does not show + // this property, a new `TypedArray` that satisfy this property will be + // constructed and pushed into `normalizedXs`. + const normalizedXs = []; + xs.forEach((x) => { + totalByteLength += x.byteLength; + // tslint:disable:no-any + normalizedXs.push(x.byteLength === x.buffer.byteLength ? 
x : + new x.constructor(x)); + if (!(x instanceof Float32Array || x instanceof Int32Array || + x instanceof Uint8Array)) { + throw new Error(`Unsupported TypedArray subtype: ${x.constructor.name}`); + } + // tslint:enable:no-any + }); + const y = new Uint8Array(totalByteLength); + let offset = 0; + normalizedXs.forEach((x) => { + y.set(new Uint8Array(x.buffer), offset); + offset += x.byteLength; + }); + return y.buffer; +} +// Use Buffer on Node.js instead of Blob/atob/btoa +const useNodeBuffer = typeof Buffer !== 'undefined' && + (typeof Blob === 'undefined' || typeof atob === 'undefined' || + typeof btoa === 'undefined'); +/** + * Calculate the byte length of a JavaScript string. + * + * Note that a JavaScript string can contain wide characters, therefore the + * length of the string is not necessarily equal to the byte length. + * + * @param str Input string. + * @returns Byte length. + */ +function stringByteLength(str) { + if (useNodeBuffer) { + return Buffer.byteLength(str); + } + return new Blob([str]).size; +} +/** + * Encode an ArrayBuffer as a base64 encoded string. + * + * @param buffer `ArrayBuffer` to be converted. + * @returns A string that base64-encodes `buffer`. + */ +function arrayBufferToBase64String(buffer) { + if (useNodeBuffer) { + return Buffer.from(buffer).toString('base64'); + } + const buf = new Uint8Array(buffer); + let s = ''; + for (let i = 0, l = buf.length; i < l; i++) { + s += String.fromCharCode(buf[i]); + } + return btoa(s); +} +/** + * Decode a base64 string as an ArrayBuffer. + * + * @param str Base64 string. + * @returns Decoded `ArrayBuffer`. 
+ */ +function base64StringToArrayBuffer(str) { + if (useNodeBuffer) { + const buf = Buffer.from(str, 'base64'); + return buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength); + } + const s = atob(str); + const buffer = new Uint8Array(s.length); + for (let i = 0; i < s.length; ++i) { + buffer.set([s.charCodeAt(i)], i); + } + return buffer.buffer; +} +/** + * Concatenate a number of ArrayBuffers into one. + * + * @param buffers A number of array buffers to concatenate. + * @returns Result of concatenating `buffers` in order. + */ +function concatenateArrayBuffers(buffers) { + if (buffers.length === 1) { + return buffers[0]; + } + let totalByteLength = 0; + buffers.forEach((buffer) => { + totalByteLength += buffer.byteLength; + }); + const temp = new Uint8Array(totalByteLength); + let offset = 0; + buffers.forEach((buffer) => { + temp.set(new Uint8Array(buffer), offset); + offset += buffer.byteLength; + }); + return temp.buffer; +} +/** + * Get the basename of a path. + * + * Behaves in a way analogous to Linux's basename command. + * + * @param path + */ +function basename(path) { + const SEPARATOR = '/'; + path = path.trim(); + while (path.endsWith(SEPARATOR)) { + path = path.slice(0, path.length - 1); + } + const items = path.split(SEPARATOR); + return items[items.length - 1]; +} +/** + * Populate ModelArtifactsInfo fields for a model with JSON topology. + * @param modelArtifacts + * @returns A ModelArtifactsInfo object. + */ +function getModelArtifactsInfoForJSON(modelArtifacts) { + if (modelArtifacts.modelTopology instanceof ArrayBuffer) { + throw new Error('Expected JSON model topology, received ArrayBuffer.'); + } + return { + dateSaved: new Date(), + modelTopologyType: 'JSON', + modelTopologyBytes: modelArtifacts.modelTopology == null ? + 0 : + stringByteLength(JSON.stringify(modelArtifacts.modelTopology)), + weightSpecsBytes: modelArtifacts.weightSpecs == null ? 
+ 0 : + stringByteLength(JSON.stringify(modelArtifacts.weightSpecs)), + weightDataBytes: modelArtifacts.weightData == null ? + 0 : + modelArtifacts.weightData.byteLength, + }; +} +/** + * Computes mantisa table for casting Float16 to Float32 + * See http://www.fox-toolkit.org/ftp/fasthalffloatconversion.pdf + * + * @returns Uint32Array, 2048 mantissa lookup values. + */ +function computeFloat16MantisaTable() { + const convertMantissa = (i) => { + let m = i << 13; + let e = 0; + while ((m & 0x00800000) === 0) { + e -= 0x00800000; + m <<= 1; + } + m &= ~0x00800000; + e += 0x38800000; + return m | e; + }; + const mantisaTable = new Uint32Array(2048); + mantisaTable[0] = 0; + for (let i = 1; i < 1024; i++) { + mantisaTable[i] = convertMantissa(i); + } + for (let i = 1024; i < 2048; i++) { + mantisaTable[i] = 0x38000000 + ((i - 1024) << 13); + } + return mantisaTable; +} +/** + * Computes exponent table for casting Float16 to Float32 + * See http://www.fox-toolkit.org/ftp/fasthalffloatconversion.pdf + * + * @returns Uint32Array, 64 exponent lookup values. + */ +function computeFloat16ExponentTable() { + const exponentTable = new Uint32Array(64); + exponentTable[0] = 0; + exponentTable[31] = 0x47800000; + exponentTable[32] = 0x80000000; + exponentTable[63] = 0xc7800000; + for (let i = 1; i < 31; i++) { + exponentTable[i] = i << 23; + } + for (let i = 33; i < 63; i++) { + exponentTable[i] = 0x80000000 + ((i - 32) << 23); + } + return exponentTable; +} +/** + * Computes offset table for casting Float16 to Float32 + * See http://www.fox-toolkit.org/ftp/fasthalffloatconversion.pdf + * + * @returns Uint32Array, 6d offset values. + */ +function computeFloat16OffsetTable() { + const offsetTable = new Uint32Array(64); + for (let i = 0; i < 64; i++) { + offsetTable[i] = 1024; + } + offsetTable[0] = offsetTable[32] = 0; + return offsetTable; +} +/** + * Retrieve a Float16 decoder which will decode a ByteArray of Float16 values + * to a Float32Array. 
+ * + * @returns Function (buffer: Uint16Array) => Float32Array which decodes + * the Uint16Array of Float16 bytes to a Float32Array. + */ +function getFloat16Decoder() { + // Algorithm is based off of http://www.fox-toolkit.org/ftp/fasthalffloatconversion.pdf + // Cache lookup tables + const mantisaTable = computeFloat16MantisaTable(); + const exponentTable = computeFloat16ExponentTable(); + const offsetTable = computeFloat16OffsetTable(); + return (quantizedArray) => { + const buffer = new ArrayBuffer(4 * quantizedArray.length); + const bufferUint32View = new Uint32Array(buffer); + for (let index = 0; index < quantizedArray.length; index++) { + const float16Bits = quantizedArray[index]; + const float32Bits = mantisaTable[offsetTable[float16Bits >> 10] + (float16Bits & 0x3ff)] + + exponentTable[float16Bits >> 10]; + bufferUint32View[index] = float32Bits; + } + return new Float32Array(buffer); + }; +} +//# sourceMappingURL=io_utils.js.map +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(39).Buffer)) + +/***/ }), +/* 14 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; + +// EXPORTS +__webpack_require__.d(__webpack_exports__, "f", function() { return /* binding */ iteratorFromItems; }); +__webpack_require__.d(__webpack_exports__, "e", function() { return /* binding */ iteratorFromFunction; }); +__webpack_require__.d(__webpack_exports__, "d", function() { return /* binding */ iteratorFromConcatenated; }); +__webpack_require__.d(__webpack_exports__, "g", function() { return /* binding */ iteratorFromZipped; }); +__webpack_require__.d(__webpack_exports__, "a", function() { return /* binding */ lazy_iterator_LazyIterator; }); +__webpack_require__.d(__webpack_exports__, "b", function() { return /* binding */ lazy_iterator_OneToManyIterator; }); +__webpack_require__.d(__webpack_exports__, "c", function() { return /* binding */ ZipMismatchMode; }); + +// UNUSED EXPORTS: iteratorFromIncrementing, 
iteratorFromConcatenatedFunction, ChainedIterator, PrefetchIterator, ShuffleIterator + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-core/dist/index.js + 269 modules +var dist = __webpack_require__(0); + +// EXTERNAL MODULE: ./node_modules/seedrandom/index.js +var seedrandom = __webpack_require__(20); + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-data/dist/util/deep_map.js +var deep_map = __webpack_require__(19); + +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-data/dist/util/deep_clone.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + * ============================================================================= + */ + + +function deepClone(container) { + return Object(deep_map["b" /* deepMap */])(container, cloneIfTensor); +} +// tslint:disable-next-line: no-any +function cloneIfTensor(item) { + if (item instanceof dist["Tensor"]) { + return ({ value: item.clone(), recurse: false }); + } + else if (Object(deep_map["e" /* isIterable */])(item)) { + return { value: null, recurse: true }; + } + else { + return { value: item, recurse: false }; + } +} +//# sourceMappingURL=deep_clone.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-data/dist/util/ring_buffer.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. 
+ * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + * ============================================================================= + */ +/** + * A ring buffer, providing O(1) FIFO, LIFO, and related operations. + */ +class RingBuffer { + /** + * Constructs a `RingBuffer`. + * @param capacity The number of items that the buffer can accomodate. + */ + constructor(capacity) { + this.capacity = capacity; + // Note we store the indices in the range 0 <= index < 2*capacity. + // This allows us to distinguish the full from the empty case. + // See https://www.snellman.net/blog/archive/2016-12-13-ring-buffers/ + this.begin = 0; // inclusive + this.end = 0; // exclusive + if (capacity == null) { + throw new RangeError('Can\'t create a ring buffer of unknown capacity.'); + } + if (capacity < 1) { + throw new RangeError('Can\'t create ring buffer of capacity < 1.'); + } + this.data = new Array(capacity); + this.doubledCapacity = 2 * capacity; + } + /** + * Map any index into the range 0 <= index < 2*capacity. 
+ */ + wrap(index) { + // don't trust % on negative numbers + while (index < 0) { + index += this.doubledCapacity; + } + return index % this.doubledCapacity; + } + get(index) { + if (index < 0) { + throw new RangeError('Can\'t get item at a negative index.'); + } + return this.data[index % this.capacity]; + } + set(index, value) { + if (index < 0) { + throw new RangeError('Can\'t set item at a negative index.'); + } + this.data[index % this.capacity] = value; + } + /** + * Returns the current number of items in the buffer. + */ + length() { + let length = this.end - this.begin; + if (length < 0) { + length = this.doubledCapacity + length; + } + return length; + } + /** + * Reports whether the buffer is full. + * @returns true if the number of items in the buffer equals its capacity, and + * false otherwise. + */ + isFull() { + return this.length() === this.capacity; + } + /** + * Reports whether the buffer is empty. + * @returns true if the number of items in the buffer equals zero, and + * false otherwise. + */ + isEmpty() { + return this.length() === 0; + } + /** + * Adds an item to the end of the buffer. + */ + push(value) { + if (this.isFull()) { + throw new RangeError('Ring buffer is full.'); + } + this.set(this.end, value); + this.end = this.wrap(this.end + 1); + } + /** + * Adds many items to the end of the buffer, in order. + */ + pushAll(values) { + for (const value of values) { + this.push(value); + } + } + /** + * Removes and returns the last item in the buffer. + */ + pop() { + if (this.isEmpty()) { + throw new RangeError('Ring buffer is empty.'); + } + this.end = this.wrap(this.end - 1); + const result = this.get(this.end); + this.set(this.end, undefined); + return result; + } + /** + * Adds an item to the beginning of the buffer. 
+ */ + unshift(value) { + if (this.isFull()) { + throw new RangeError('Ring buffer is full.'); + } + this.begin = this.wrap(this.begin - 1); + this.set(this.begin, value); + } + /** + * Removes and returns the first item in the buffer. + */ + shift() { + if (this.isEmpty()) { + throw new RangeError('Ring buffer is empty.'); + } + const result = this.get(this.begin); + this.set(this.begin, undefined); + this.begin = this.wrap(this.begin + 1); + return result; + } + /** + * Removes and returns a specific item in the buffer, and moves the last item + * to the vacated slot. This is useful for implementing a shuffling stream. + * Note that this operation necessarily scrambles the original order. + * + * @param relativeIndex: the index of the item to remove, relative to the + * first item in the buffer (e.g., hiding the ring nature of the underlying + * storage). + */ + shuffleExcise(relativeIndex) { + if (this.isEmpty()) { + throw new RangeError('Ring buffer is empty.'); + } + const index = this.wrap(this.begin + relativeIndex); + const result = this.get(index); + this.set(index, this.pop()); + return result; + } +} +//# sourceMappingURL=ring_buffer.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-data/dist/util/growing_ring_buffer.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + * ============================================================================= + */ + +class growing_ring_buffer_GrowingRingBuffer extends RingBuffer { + /** + * Constructs a `GrowingRingBuffer`. + */ + constructor() { + super(growing_ring_buffer_GrowingRingBuffer.INITIAL_CAPACITY); + } + isFull() { + return false; + } + push(value) { + if (super.isFull()) { + this.expand(); + } + super.push(value); + } + unshift(value) { + if (super.isFull()) { + this.expand(); + } + super.unshift(value); + } + /** + * Doubles the capacity of the buffer. + */ + expand() { + const newCapacity = this.capacity * 2; + const newData = new Array(newCapacity); + const len = this.length(); + // Rotate the buffer to start at index 0 again, since we can't just + // allocate more space at the end. + for (let i = 0; i < len; i++) { + newData[i] = this.get(this.wrap(this.begin + i)); + } + this.data = newData; + this.capacity = newCapacity; + this.doubledCapacity = 2 * this.capacity; + this.begin = 0; + this.end = len; + } +} +growing_ring_buffer_GrowingRingBuffer.INITIAL_CAPACITY = 32; +//# sourceMappingURL=growing_ring_buffer.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-data/dist/iterators/lazy_iterator.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + * ============================================================================= + */ + + + + + + +// Here we implement a simple asynchronous iterator. +// This lets us avoid using either third-party stream libraries or +// recent TypeScript language support requiring polyfills. +/** + * Create a `LazyIterator` from an array of items. + */ +function iteratorFromItems(items) { + return new lazy_iterator_ArrayIterator(items); +} +/** + * Create a `LazyIterator` of incrementing integers. + */ +function iteratorFromIncrementing(start) { + let i = start; + return iteratorFromFunction(() => ({ value: i++, done: false })); +} +/** + * Create a `LazyIterator` from a function. + * + * ```js + * let i = -1; + * const func = () => + * ++i < 5 ? {value: i, done: false} : {value: null, done: true}; + * const iter = tf.data.iteratorFromFunction(func); + * await iter.forEachAsync(e => console.log(e)); + * ``` + * + * @param func A function that produces data on each call. + */ +function iteratorFromFunction(func) { + return new FunctionCallIterator(func); +} +/** + * Create a `LazyIterator` by concatenating underlying streams, which are + * themselves provided as a stream. + * + * This can also be thought of as a "stream flatten" operation. + * + * @param baseIterators A stream of streams to be concatenated. + * @param baseErrorHandler An optional function that can intercept `Error`s + * raised during a `next()` call on the base stream. This function can decide + * whether the error should be propagated, whether the error should be + * ignored, or whether the base stream should be terminated. + */ +function iteratorFromConcatenated(baseIterators, baseErrorHandler) { + return new ChainedIterator(baseIterators, baseErrorHandler); +} +/** + * Create a `LazyIterator` by concatenating streams produced by calling a + * stream-generating function a given number of times. 
+ *
+ * Since a `LazyIterator` is read-once, it cannot be repeated, but this
+ * function can be used to achieve a similar effect:
+ *
+ *   LazyIterator.ofConcatenatedFunction(() => new MyIterator(), 6);
+ *
+ * @param iteratorFunc: A function that produces a new stream on each call.
+ * @param count: The number of times to call the function.
+ * @param baseErrorHandler An optional function that can intercept `Error`s
+ * raised during a `next()` call on the base stream. This function can decide
+ * whether the error should be propagated, whether the error should be
+ * ignored, or whether the base stream should be terminated.
+ */
+function iteratorFromConcatenatedFunction(iteratorFunc, count, baseErrorHandler) {
+    return iteratorFromConcatenated(iteratorFromFunction(iteratorFunc).take(count), baseErrorHandler);
+}
+/**
+ * Create a `LazyIterator` by zipping together an array, dict, or nested
+ * structure of `LazyIterator`s (and perhaps additional constants).
+ *
+ * The underlying streams must provide elements in a consistent order such
+ * that they correspond.
+ *
+ * Typically, the underlying streams should have the same number of
+ * elements. If they do not, the behavior is determined by the
+ * `mismatchMode` argument.
+ *
+ * The nested structure of the `iterators` argument determines the
+ * structure of elements in the resulting iterator.
+ *
+ * @param iterators: An array or object containing LazyIterators at the
+ * leaves.
+ * @param mismatchMode: Determines what to do when one underlying iterator
+ * is exhausted before the others. `ZipMismatchMode.FAIL` (the default)
+ * causes an error to be thrown in this case. `ZipMismatchMode.SHORTEST`
+ * causes the zipped iterator to terminate with the first underlying
+ * streams, so elements remaining on the longer streams are ignored.
+ * `ZipMismatchMode.LONGEST` causes the zipped stream to continue, filling
+ * in nulls for the exhausted streams, until all streams are exhausted. 
+ */ +function iteratorFromZipped(iterators, mismatchMode = ZipMismatchMode.FAIL) { + return new lazy_iterator_ZipIterator(iterators, mismatchMode); +} +/** + * An asynchronous iterator, providing lazy access to a potentially + * unbounded stream of elements. + * + * Iterator can be obtained from a dataset: + * `const iter = await dataset.iterator();` + */ +class lazy_iterator_LazyIterator { + /** + * Collect all remaining elements of a bounded stream into an array. + * Obviously this will succeed only for small streams that fit in memory. + * Useful for testing. + * + * @returns A Promise for an array of stream elements, which will resolve + * when the stream is exhausted. + */ + async toArray() { + const result = []; + let x = await this.next(); + while (!x.done) { + result.push(x.value); + x = await this.next(); + } + return result; + } + /** + * Collect all elements of this dataset into an array with prefetching 100 + * elements. This is useful for testing, because the prefetch changes the + * order in which the Promises are resolved along the processing pipeline. + * This may help expose bugs where results are dependent on the order of + * Promise resolution rather than on the logical order of the stream (i.e., + * due to hidden mutable state). + * + * @returns A Promise for an array of stream elements, which will resolve + * when the stream is exhausted. + */ + async toArrayForTest() { + const stream = this.prefetch(100); + const result = []; + let x = await stream.next(); + while (!x.done) { + result.push(x.value); + x = await stream.next(); + } + return result; + } + /** + * Draw items from the stream until it is exhausted. + * + * This can be useful when the stream has side effects but no output. In + * that case, calling this function guarantees that the stream will be + * fully processed. 
+ */ + async resolveFully() { + let x = await this.next(); + while (!x.done) { + x = await this.next(); + } + } + /** + * Draw items from the stream until it is exhausted, or a predicate fails. + * + * This can be useful when the stream has side effects but no output. In + * that case, calling this function guarantees that the stream will be + * fully processed. + */ + async resolveWhile(predicate) { + let x = await this.next(); + let shouldContinue = predicate(x.value); + while ((!x.done) && shouldContinue) { + x = await this.next(); + shouldContinue = predicate(x.value); + } + } + /** + * Handles errors thrown on this stream using a provided handler function. + * + * @param handler A function that handles any `Error` thrown during a `next()` + * call and returns true if the stream should continue (dropping the failed + * call) or false if the stream should quietly terminate. If the handler + * itself throws (or rethrows) an `Error`, that will be propagated. + * + * @returns A `LazyIterator` of elements passed through from upstream, + * possibly filtering or terminating on upstream `next()` calls that + * throw an `Error`. + */ + handleErrors(handler) { + return new ErrorHandlingLazyIterator(this, handler); + } + // TODO(soergel): Implement reduce() etc. + /** + * Filters this stream according to `predicate`. + * + * @param predicate A function mapping a stream element to a boolean or a + * `Promise` for one. + * + * @returns A `LazyIterator` of elements for which the predicate was true. + */ + filter(predicate) { + return new lazy_iterator_FilterIterator(this, predicate); + } + /** + * Maps this stream through a 1-to-1 transform. + * + * @param transform A function mapping a stream element to a transformed + * element. + * + * @returns A `LazyIterator` of transformed elements. + */ + map(transform) { + return new lazy_iterator_MapIterator(this, transform); + } + /** + * Maps this stream through an async 1-to-1 transform. 
+ * + * @param transform A function mapping a stream element to a `Promise` for a + * transformed stream element. + * + * @returns A `LazyIterator` of transformed elements. + */ + mapAsync(transform) { + return new lazy_iterator_AsyncMapIterator(this, transform); + } + /** + * Maps this stream through a 1-to-1 transform, forcing serial execution. + * + * @param transform A function mapping a stream element to a transformed + * element. + * + * @returns A `LazyIterator` of transformed elements. + */ + serialMapAsync(transform) { + return new lazy_iterator_AsyncMapIterator(this, transform).serial(); + } + /** + * Maps this stream through a 1-to-many transform. + * + * @param transform A function mapping a stream element to an array of + * transformed elements. + * + * @returns A `DataStream` of transformed elements. + */ + flatmap(transform) { + return new lazy_iterator_FlatmapIterator(this, transform); + } + /** + * Apply a function to every element of the stream. + * + * @param f A function to apply to each stream element. + */ + async forEachAsync(f) { + return this.map(f).resolveFully(); + } + /** + * Apply a function to every element of the stream, forcing serial execution. + * + * @param f A function to apply to each stream element. Should return 'true' + * to indicate that the stream should continue, or 'false' to cause it to + * terminate. + */ + async serialForEach(f) { + return this.serialMapAsync(f).resolveWhile(x => (x === true)); + } + /** + * Groups elements into batches, represented as arrays of elements. + * + * We can think of the elements of this iterator as 'rows' (even if they are + * nested structures). By the same token, consecutive values for a given + * key within the elements form a 'column'. This matches the usual sense of + * 'row' and 'column' when processing tabular data (e.g., parsing a CSV). + * + * Thus, "Row-major" means that the resulting batch is simply a collection of + * rows: `[row1, row2, row3, ...]`. 
This is in contrast to the column-major
+ * form, which is needed for vectorized computation.
+ *
+ * @param batchSize The number of elements desired per batch.
+ * @param smallLastBatch Whether to emit the final batch when it has fewer
+ * than batchSize elements. Default true.
+ * @returns A `LazyIterator` of batches of elements, represented as arrays
+ * of the original element type.
+ */
+ rowMajorBatch(batchSize, smallLastBatch = true) {
+ return new RowMajorBatchIterator(this, batchSize, smallLastBatch);
+ }
+ /**
+ * Groups elements into batches, represented in column-major form.
+ *
+ * We can think of the elements of this iterator as 'rows' (even if they are
+ * nested structures). By the same token, consecutive values for a given
+ * key within the elements form a 'column'. This matches the usual sense of
+ * 'row' and 'column' when processing tabular data (e.g., parsing a CSV).
+ *
+ * Thus, "column-major" means that the resulting batch is a (potentially
+ * nested) structure representing the columns. Each column entry, then,
+ * contains a collection of the values found in that column for a range of
+ * input elements. This representation allows for vectorized computation, in
+ * contrast to the row-major form.
+ *
+ * The inputs should all have the same nested structure (i.e., of arrays and
+ * dicts). The result is a single object with the same nested structure,
+ * where the leaves are arrays collecting the values of the inputs at that
+ * location (or, optionally, the result of a custom function applied to those
+ * arrays).
+ *
+ * @param batchSize The number of elements desired per batch.
+ * @param smallLastBatch Whether to emit the final batch when it has fewer
+ * than batchSize elements. Default true.
+ * @param zipFn: (optional) A function that expects an array of elements at a
+ * single node of the object tree, and returns a `DeepMapResult`. 
The + * `DeepMapResult` either provides a result value for that node (i.e., + * representing the subtree), or indicates that the node should be processed + * recursively. The default zipFn recurses as far as possible and places + * arrays at the leaves. + * @returns A `LazyIterator` of batches of elements, represented as an object + * with collections at the leaves. + */ + columnMajorBatch(batchSize, smallLastBatch = true, + // tslint:disable-next-line:no-any + zipFn = deep_map["f" /* zipToList */]) { + // First collect the desired number of input elements as a row-major batch. + const rowBatches = this.rowMajorBatch(batchSize, smallLastBatch); + // Now 'rotate' or 'pivot' the data, collecting all values from each column + // in the batch (i.e., for each key within the elements) into an array. + return rowBatches.map(x => Object(deep_map["d" /* deepZip */])(x, zipFn)); + } + /** + * Concatenate this `LazyIterator` with another. + * + * @param iterator A `LazyIterator` to be concatenated onto this one. + * @param baseErrorHandler An optional function that can intercept `Error`s + * raised during a `next()` call on the base stream. This function can + * decide whether the error should be propagated, whether the error should + * be ignored, or whether the base stream should be terminated. + * @returns A `LazyIterator`. + */ + concatenate(iterator, baseErrorHandler) { + return new ChainedIterator(iteratorFromItems([this, iterator]), baseErrorHandler); + } + /** + * Limits this stream to return at most `count` items. + * + * @param count The maximum number of items to provide from the stream. If + * a negative or undefined value is given, the entire stream is returned + * unaltered. + */ + take(count) { + if (count < 0 || count == null) { + return this; + } + return new TakeIterator(this, count); + } + /** + * Skips the first `count` items in this stream. + * + * @param count The number of items to skip. 
If a negative or undefined + * value is given, the entire stream is returned unaltered. + */ + skip(count) { + if (count < 0 || count == null) { + return this; + } + return new lazy_iterator_SkipIterator(this, count); + } + /** + * Prefetch the first `bufferSize` items in this stream. + * + * Note this prefetches Promises, but makes no guarantees about when those + * Promises resolve. + * + * @param bufferSize: An integer specifying the number of elements to be + * prefetched. + */ + prefetch(bufferSize) { + return new lazy_iterator_PrefetchIterator(this, bufferSize); + } + // TODO(soergel): deep sharded shuffle, where supported + /** + * Randomly shuffles the elements of this stream. + * + * @param bufferSize: An integer specifying the number of elements from + * this stream from which the new stream will sample. + * @param seed: (Optional.) An integer specifying the random seed that + * will be used to create the distribution. + */ + shuffle(windowSize, seed) { + return new lazy_iterator_ShuffleIterator(this, windowSize, seed); + } + /** + * Force an iterator to execute serially: each next() call will await the + * prior one, so that they cannot execute concurrently. + */ + serial() { + return new SerialIterator(this); + } +} +// ============================================================================ +// The following private classes serve to implement the chainable methods +// on LazyIterator. Unfortunately they can't be placed in separate files, +// due to resulting trouble with circular imports. 
+// ============================================================================ +// Iterators that just extend LazyIterator directly +// ============================================================================ +class lazy_iterator_ArrayIterator extends lazy_iterator_LazyIterator { + constructor(items) { + super(); + this.items = items; + this.trav = 0; + } + summary() { + return `Array of ${this.items.length} items`; + } + async next() { + if (this.trav >= this.items.length) { + return { value: null, done: true }; + } + const item = this.items[this.trav]; + this.trav++; + return { value: deepClone(item), done: false }; + } +} +class FunctionCallIterator extends lazy_iterator_LazyIterator { + constructor(nextFn) { + super(); + this.nextFn = nextFn; + } + summary() { + return `Function call`; + } + async next() { + try { + return this.nextFn(); + } + catch (e) { + // Modify the error message but leave the stack trace intact + e.message = + `Error thrown while iterating through a dataset: ${e.message}`; + throw e; + } + } +} +class SerialIterator extends lazy_iterator_LazyIterator { + constructor(upstream) { + super(); + this.upstream = upstream; + this.lastRead = Promise.resolve({ value: null, done: false }); + } + summary() { + return `${this.upstream.summary()} -> Serial`; + } + async next() { + // This sets this.lastRead to a new Promise right away, as opposed to + // saying `await this.lastRead; this.lastRead = this.serialNext();` which + // would not work because this.nextRead would be updated only after the + // promise resolves. + this.lastRead = this.lastRead.then(() => this.serialNext()); + return this.lastRead; + } + async serialNext() { + return this.upstream.next(); + } +} +class lazy_iterator_SkipIterator extends lazy_iterator_LazyIterator { + constructor(upstream, maxCount) { + super(); + this.upstream = upstream; + this.maxCount = maxCount; + // Local state that should not be clobbered by out-of-order execution. 
+ this.count = 0; + this.lastRead = Promise.resolve({ value: null, done: false }); + } + summary() { + return `${this.upstream.summary()} -> Skip`; + } + async next() { + // This sets this.lastRead to a new Promise right away, as opposed to + // saying `await this.lastRead; this.lastRead = this.serialNext();` which + // would not work because this.nextRead would be updated only after the + // promise resolves. + this.lastRead = this.lastRead.then(() => this.serialNext()); + return this.lastRead; + } + async serialNext() { + // TODO(soergel): consider tradeoffs of reading in parallel, eg. + // collecting next() promises in an Array and then waiting for + // Promise.all() of those. Benefit: pseudo-parallel execution. Drawback: + // maybe delayed GC. + while (this.count++ < this.maxCount) { + const skipped = await this.upstream.next(); + // short-circuit if upstream is already empty + if (skipped.done) { + return skipped; + } + dist["dispose"](skipped.value); + } + return this.upstream.next(); + } +} +class TakeIterator extends lazy_iterator_LazyIterator { + constructor(upstream, maxCount) { + super(); + this.upstream = upstream; + this.maxCount = maxCount; + this.count = 0; + } + summary() { + return `${this.upstream.summary()} -> Take`; + } + async next() { + if (this.count++ >= this.maxCount) { + return { value: null, done: true }; + } + return this.upstream.next(); + } +} +// Note this batch just groups items into row-wise element arrays. +// Rotating these to a column-wise representation happens only at the dataset +// level. 
+class RowMajorBatchIterator extends lazy_iterator_LazyIterator { + constructor(upstream, batchSize, enableSmallLastBatch = true) { + super(); + this.upstream = upstream; + this.batchSize = batchSize; + this.enableSmallLastBatch = enableSmallLastBatch; + this.lastRead = Promise.resolve({ value: null, done: false }); + } + summary() { + return `${this.upstream.summary()} -> RowMajorBatch`; + } + async next() { + // This sets this.lastRead to a new Promise right away, as opposed to + // saying `await this.lastRead; this.lastRead = this.serialNext();` which + // would not work because this.nextRead would be updated only after the + // promise resolves. + this.lastRead = this.lastRead.then(() => this.serialNext()); + return this.lastRead; + } + async serialNext() { + const batch = []; + while (batch.length < this.batchSize) { + const item = await this.upstream.next(); + if (item.done) { + if (this.enableSmallLastBatch && batch.length > 0) { + return { value: batch, done: false }; + } + return { value: null, done: true }; + } + batch.push(item.value); + } + return { value: batch, done: false }; + } +} +class lazy_iterator_FilterIterator extends lazy_iterator_LazyIterator { + constructor(upstream, predicate) { + super(); + this.upstream = upstream; + this.predicate = predicate; + this.lastRead = Promise.resolve({ value: null, done: false }); + } + summary() { + return `${this.upstream.summary()} -> Filter`; + } + async next() { + // This sets this.lastRead to a new Promise right away, as opposed to + // saying `await this.lastRead; this.lastRead = this.serialNext();` which + // would not work because this.nextRead would be updated only after the + // promise resolves. 
+ this.lastRead = this.lastRead.then(() => this.serialNext()); + return this.lastRead; + } + async serialNext() { + while (true) { + const item = await this.upstream.next(); + if (item.done || this.predicate(item.value)) { + return item; + } + dist["dispose"](item.value); + } + } +} +class lazy_iterator_MapIterator extends lazy_iterator_LazyIterator { + constructor(upstream, transform) { + super(); + this.upstream = upstream; + this.transform = transform; + } + summary() { + return `${this.upstream.summary()} -> Map`; + } + async next() { + const item = await this.upstream.next(); + if (item.done) { + return { value: null, done: true }; + } + const inputTensors = dist["tensor_util"].getTensorsInContainer(item.value); + // Careful: the transform may mutate the item in place. + // That's why we have to remember the input Tensors above, and then + // below dispose only those that were not passed through to the output. + // Note too that the transform function is responsible for tidying + // any intermediate Tensors. Here we are concerned only about the + // inputs. + const mapped = this.transform(item.value); + const outputTensors = dist["tensor_util"].getTensorsInContainer(mapped); + // TODO(soergel) faster intersection + // TODO(soergel) move to tf.disposeExcept(in, out)? 
+ for (const t of inputTensors) { + if (!dist["tensor_util"].isTensorInList(t, outputTensors)) { + t.dispose(); + } + } + return { value: mapped, done: false }; + } +} +class ErrorHandlingLazyIterator extends lazy_iterator_LazyIterator { + constructor(upstream, handler) { + super(); + this.upstream = upstream; + this.handler = handler; + this.count = 0; + this.lastRead = Promise.resolve({ value: null, done: false }); + } + summary() { + return `${this.upstream.summary()} -> handleErrors`; + } + async next() { + // This sets this.lastRead to a new Promise right away, as opposed to + // saying `await this.lastRead; this.lastRead = this.serialNext();` which + // would not work because this.nextRead would be updated only after the + // promise resolves. + this.lastRead = this.lastRead.then(() => this.serialNext()); + return this.lastRead; + } + async serialNext() { + while (true) { + try { + return await this.upstream.next(); + } + catch (e) { + if (!this.handler(e)) { + return { value: null, done: true }; + } + // If the handler returns true, loop and fetch the next upstream item. + // If the upstream iterator throws an endless stream of errors, and if + // the handler says to ignore them, then we loop forever here. That is + // the correct behavior-- it's up to the handler to decide when to stop. + } + } + } +} +class lazy_iterator_AsyncMapIterator extends lazy_iterator_LazyIterator { + constructor(upstream, transform) { + super(); + this.upstream = upstream; + this.transform = transform; + } + summary() { + return `${this.upstream.summary()} -> AsyncMap`; + } + async next() { + const item = await this.upstream.next(); + if (item.done) { + return { value: null, done: true }; + } + const inputTensors = dist["tensor_util"].getTensorsInContainer(item.value); + // Careful: the transform may mutate the item in place. + // That's why we have to remember the input Tensors above, and then + // below dispose only those that were not passed through to the output. 
+ // Note too that the transform function is responsible for tidying + // any intermediate Tensors. Here we are concerned only about the + // inputs. + const mapped = await this.transform(item.value); + const outputTensors = dist["tensor_util"].getTensorsInContainer(mapped); + // TODO(soergel) faster intersection + // TODO(soergel) move to tf.disposeExcept(in, out)? + for (const t of inputTensors) { + if (!dist["tensor_util"].isTensorInList(t, outputTensors)) { + t.dispose(); + } + } + return { value: mapped, done: false }; + } +} +// Iterators that maintain a queue of pending items +// ============================================================================ +/** + * A base class for transforming streams that operate by maintaining an + * output queue of elements that are ready to return via next(). This is + * commonly required when the transformation is 1-to-many: A call to next() + * may trigger a call to the underlying stream, which will produce many + * mapped elements of this stream-- of which we need to return only one, so + * we have to queue the rest. + */ +class lazy_iterator_OneToManyIterator extends lazy_iterator_LazyIterator { + constructor() { + super(); + this.outputQueue = new growing_ring_buffer_GrowingRingBuffer(); + this.lastRead = Promise.resolve({ value: null, done: false }); + } + async next() { + // This sets this.lastRead to a new Promise right away, as opposed to + // saying `await this.lastRead; this.lastRead = this.serialNext();` which + // would not work because this.nextRead would be updated only after the + // promise resolves. + this.lastRead = this.lastRead.then(() => this.serialNext()); + return this.lastRead; + } + async serialNext() { + // Fetch so that the queue contains at least one item if possible. + // If the upstream source is exhausted, AND there are no items left in + // the output queue, then this stream is also exhausted. + while (this.outputQueue.length() === 0) { + // TODO(soergel): consider parallel reads. 
+ if (!await this.pump()) { + return { value: null, done: true }; + } + } + return { value: this.outputQueue.shift(), done: false }; + } +} +class lazy_iterator_FlatmapIterator extends lazy_iterator_OneToManyIterator { + constructor(upstream, transform) { + super(); + this.upstream = upstream; + this.transform = transform; + } + summary() { + return `${this.upstream.summary()} -> Flatmap`; + } + async pump() { + const item = await this.upstream.next(); + if (item.done) { + return false; + } + const inputTensors = dist["tensor_util"].getTensorsInContainer(item.value); + // Careful: the transform may mutate the item in place. + // that's why we have to remember the input Tensors above, and then + // below dispose only those that were not passed through to the output. + // Note too that the transform function is responsible for tidying any + // intermediate Tensors. Here we are concerned only about the inputs. + const mappedArray = this.transform(item.value); + const outputTensors = dist["tensor_util"].getTensorsInContainer(mappedArray); + this.outputQueue.pushAll(mappedArray); + // TODO(soergel) faster intersection, and deduplicate outputTensors + // TODO(soergel) move to tf.disposeExcept(in, out)? + for (const t of inputTensors) { + if (!dist["tensor_util"].isTensorInList(t, outputTensors)) { + t.dispose(); + } + } + return true; + } +} +/** + * Provides a `LazyIterator` that concatenates a stream of underlying + * streams. + * + * Doing this in a concurrency-safe way requires some trickery. In + * particular, we want this stream to return the elements from the + * underlying streams in the correct order according to when next() was + * called, even if the resulting Promises resolve in a different order. 
+ */ +class ChainedIterator extends lazy_iterator_LazyIterator { + constructor(iterators, baseErrorHandler) { + super(); + this.baseErrorHandler = baseErrorHandler; + // Strict Promise execution order: + // a next() call may not even begin until the previous one completes. + this.lastRead = null; + // Local state that should not be clobbered by out-of-order execution. + this.iterator = null; + this.moreIterators = iterators; + } + summary() { + const upstreamSummaries = 'TODO: fill in upstream of chained summaries'; + return `${upstreamSummaries} -> Chained`; + } + async next() { + this.lastRead = this.readFromChain(this.lastRead); + return this.lastRead; + } + async readFromChain(lastRead) { + // Must await on the previous read since the previous read may have advanced + // the stream of streams, from which we need to read. + // This is unfortunate since we can't parallelize reads. Which means + // prefetching of chained streams is a no-op. + // One solution is to prefetch immediately upstream of this. + await lastRead; + if (this.iterator == null) { + const iteratorResult = await this.moreIterators.next(); + if (iteratorResult.done) { + // No more streams to stream from. + return { value: null, done: true }; + } + this.iterator = iteratorResult.value; + if (this.baseErrorHandler != null) { + this.iterator = this.iterator.handleErrors(this.baseErrorHandler); + } + } + const itemResult = await this.iterator.next(); + if (itemResult.done) { + this.iterator = null; + return this.readFromChain(lastRead); + } + return itemResult; + } +} +var ZipMismatchMode; +(function (ZipMismatchMode) { + ZipMismatchMode[ZipMismatchMode["FAIL"] = 0] = "FAIL"; + ZipMismatchMode[ZipMismatchMode["SHORTEST"] = 1] = "SHORTEST"; + ZipMismatchMode[ZipMismatchMode["LONGEST"] = 2] = "LONGEST"; // use nulls for exhausted streams; use up the longest stream. 
+})(ZipMismatchMode || (ZipMismatchMode = {}));
+/**
+ * Provides a `LazyIterator` that zips together an array, dict, or nested
+ * structure of `LazyIterator`s (and perhaps additional constants).
+ *
+ * The underlying streams must provide elements in a consistent order such
+ * that they correspond.
+ *
+ * Typically, the underlying streams should have the same number of
+ * elements. If they do not, the behavior is determined by the
+ * `mismatchMode` argument.
+ *
+ * The nested structure of the `iterators` argument determines the
+ * structure of elements in the resulting iterator.
+ *
+ * Doing this in a concurrency-safe way requires some trickery. In
+ * particular, we want this stream to return the elements from the
+ * underlying streams in the correct order according to when next() was
+ * called, even if the resulting Promises resolve in a different order.
+ *
+ * @param iterators: An array or object containing LazyIterators at the
+ * leaves.
+ * @param mismatchMode: Determines what to do when one underlying iterator
+ * is exhausted before the others. `ZipMismatchMode.FAIL` (the default)
+ * causes an error to be thrown in this case. `ZipMismatchMode.SHORTEST`
+ * causes the zipped iterator to terminate with the first underlying
+ * streams, so elements remaining on the longer streams are ignored.
+ * `ZipMismatchMode.LONGEST` causes the zipped stream to continue, filling
+ * in nulls for the exhausted streams, until all streams are exhausted. 
+ */ +class lazy_iterator_ZipIterator extends lazy_iterator_LazyIterator { + constructor(iterators, mismatchMode = ZipMismatchMode.FAIL) { + super(); + this.iterators = iterators; + this.mismatchMode = mismatchMode; + this.count = 0; + this.currentPromise = null; + } + summary() { + const upstreamSummaries = 'TODO: fill in upstream of zip summaries'; + return `{${upstreamSummaries}} -> Zip`; + } + async nextState(afterState) { + // This chaining ensures that the underlying next() are not even called + // before the previous ones have resolved. + await afterState; + // Collect underlying iterator "done" signals as a side effect in + // getNext() + let numIterators = 0; + let iteratorsDone = 0; + function getNext(container) { + if (container instanceof lazy_iterator_LazyIterator) { + const result = container.next(); + return { + value: result.then(x => { + numIterators++; + if (x.done) { + iteratorsDone++; + } + return x.value; + }), + recurse: false + }; + } + else { + return { value: null, recurse: true }; + } + } + const mapped = await Object(deep_map["c" /* deepMapAndAwaitAll */])(this.iterators, getNext); + if (numIterators === iteratorsDone) { + // The streams have all ended. + return { value: null, done: true }; + } + if (iteratorsDone > 0) { + switch (this.mismatchMode) { + case ZipMismatchMode.FAIL: + throw new Error('Zipped streams should have the same length. ' + + `Mismatched at element ${this.count}.`); + case ZipMismatchMode.SHORTEST: + return { value: null, done: true }; + case ZipMismatchMode.LONGEST: + default: + // Continue. The exhausted streams already produced value: null. 
+ } + } + this.count++; + return { value: mapped, done: false }; + } + async next() { + this.currentPromise = this.nextState(this.currentPromise); + return this.currentPromise; + } +} +// Iterators that maintain a ring buffer of pending promises +// ============================================================================ +/** + * A stream that prefetches a given number of items from an upstream source, + * returning them in FIFO order. + * + * Note this prefetches Promises, but makes no guarantees about when those + * Promises resolve. + */ +class lazy_iterator_PrefetchIterator extends lazy_iterator_LazyIterator { + constructor(upstream, bufferSize) { + super(); + this.upstream = upstream; + this.bufferSize = bufferSize; + this.buffer = new RingBuffer(bufferSize); + } + summary() { + return `${this.upstream.summary()} -> Prefetch`; + } + /** + * Refill the prefetch buffer. Returns only after the buffer is full, or + * the upstream source is exhausted. + */ + refill() { + while (!this.buffer.isFull()) { + const v = this.upstream.next(); + this.buffer.push(v); + } + } + next() { + this.refill(); + // This shift will never throw an error because the buffer is always + // full after a refill. If the stream is exhausted, the buffer will be + // full of Promises that will resolve to the end-of-stream signal. + return this.buffer.shift(); + } +} +/** + * A stream that performs a sliding-window random shuffle on an upstream + * source. This is like a `PrefetchIterator` except that the items are + * returned in randomized order. Mixing naturally improves as the buffer + * size increases. + */ +class lazy_iterator_ShuffleIterator extends lazy_iterator_PrefetchIterator { + constructor(upstream, windowSize, seed) { + super(upstream, windowSize); + this.upstream = upstream; + this.windowSize = windowSize; + // Local state that should not be clobbered by out-of-order execution. 
+ this.upstreamExhausted = false; + this.random = seedrandom["alea"](seed || dist["util"].now().toString()); + this.lastRead = Promise.resolve({ value: null, done: false }); + } + async next() { + // This sets this.lastRead to a new Promise right away, as opposed to + // saying `await this.lastRead; this.lastRead = this.serialNext();` which + // would not work because this.nextRead would be updated only after the + // promise resolves. + this.lastRead = this.lastRead.then(() => this.serialNext()); + return this.lastRead; + } + randomInt(max) { + return Math.floor(this.random() * max); + } + chooseIndex() { + return this.randomInt(this.buffer.length()); + } + async serialNext() { + // TODO(soergel): consider performance + if (!this.upstreamExhausted) { + this.refill(); + } + while (!this.buffer.isEmpty()) { + const chosenIndex = this.chooseIndex(); + const result = await this.buffer.shuffleExcise(chosenIndex); + if (result.done) { + this.upstreamExhausted = true; + } + else { + this.refill(); + return result; + } + } + return { value: null, done: true }; + } +} +//# sourceMappingURL=lazy_iterator.js.map + +/***/ }), +/* 15 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* WEBPACK VAR INJECTION */(function(Buffer) {/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return OperationMapper; }); +/* unused harmony export decodeBase64 */ +/* unused harmony export parseStringParam */ +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "i", function() { return getStringParam; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "c", function() { return getBoolParam; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "f", function() { return getNumberParam; }); +/* unused harmony export parseDtypeParam */ +/* unused harmony export getFuncParam */ +/* harmony export (binding) */ 
__webpack_require__.d(__webpack_exports__, "e", function() { return getDtypeParam; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "d", function() { return getDtypeArrayParam; }); +/* unused harmony export parseTensorShapeParam */ +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "k", function() { return getTensorShapeParam; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "g", function() { return getNumericArrayParam; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "h", function() { return getStringArrayParam; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "j", function() { return getTensorShapeArrayParam; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "b", function() { return getBoolArrayParam; }); +/* harmony import */ var _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(0); +/* harmony import */ var _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(21); +/* harmony import */ var _custom_op_register__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(24); +/* harmony import */ var _executors_utils__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(2); +/* harmony import */ var _op_list_arithmetic__WEBPACK_IMPORTED_MODULE_4__ = __webpack_require__(41); +/* harmony import */ var _op_list_basic_math__WEBPACK_IMPORTED_MODULE_5__ = __webpack_require__(42); +/* harmony import */ var _op_list_control__WEBPACK_IMPORTED_MODULE_6__ = __webpack_require__(43); +/* harmony import */ var _op_list_convolution__WEBPACK_IMPORTED_MODULE_7__ = __webpack_require__(44); +/* harmony import */ var _op_list_creation__WEBPACK_IMPORTED_MODULE_8__ = __webpack_require__(45); +/* harmony import */ var _op_list_dynamic__WEBPACK_IMPORTED_MODULE_9__ = __webpack_require__(46); +/* harmony import */ var _op_list_evaluation__WEBPACK_IMPORTED_MODULE_10__ = 
__webpack_require__(47); +/* harmony import */ var _op_list_graph__WEBPACK_IMPORTED_MODULE_11__ = __webpack_require__(48); +/* harmony import */ var _op_list_image__WEBPACK_IMPORTED_MODULE_12__ = __webpack_require__(49); +/* harmony import */ var _op_list_logical__WEBPACK_IMPORTED_MODULE_13__ = __webpack_require__(50); +/* harmony import */ var _op_list_matrices__WEBPACK_IMPORTED_MODULE_14__ = __webpack_require__(51); +/* harmony import */ var _op_list_normalization__WEBPACK_IMPORTED_MODULE_15__ = __webpack_require__(52); +/* harmony import */ var _op_list_reduction__WEBPACK_IMPORTED_MODULE_16__ = __webpack_require__(53); +/* harmony import */ var _op_list_slice_join__WEBPACK_IMPORTED_MODULE_17__ = __webpack_require__(54); +/* harmony import */ var _op_list_spectral__WEBPACK_IMPORTED_MODULE_18__ = __webpack_require__(55); +/* harmony import */ var _op_list_transformation__WEBPACK_IMPORTED_MODULE_19__ = __webpack_require__(56); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + + + + + + + + + + + + + + + + + + + +class OperationMapper { + // Singleton instance for the mapper + static get Instance() { + return this._instance || (this._instance = new this()); + } + // Loads the op mapping from the JSON file. 
+ constructor() { + const ops = [ + _op_list_arithmetic__WEBPACK_IMPORTED_MODULE_4__, _op_list_basic_math__WEBPACK_IMPORTED_MODULE_5__, _op_list_control__WEBPACK_IMPORTED_MODULE_6__, _op_list_convolution__WEBPACK_IMPORTED_MODULE_7__, _op_list_creation__WEBPACK_IMPORTED_MODULE_8__, _op_list_dynamic__WEBPACK_IMPORTED_MODULE_9__, + _op_list_evaluation__WEBPACK_IMPORTED_MODULE_10__, _op_list_logical__WEBPACK_IMPORTED_MODULE_13__, _op_list_image__WEBPACK_IMPORTED_MODULE_12__, _op_list_graph__WEBPACK_IMPORTED_MODULE_11__, _op_list_matrices__WEBPACK_IMPORTED_MODULE_14__, _op_list_normalization__WEBPACK_IMPORTED_MODULE_15__, _op_list_reduction__WEBPACK_IMPORTED_MODULE_16__, + _op_list_slice_join__WEBPACK_IMPORTED_MODULE_17__, _op_list_spectral__WEBPACK_IMPORTED_MODULE_18__, _op_list_transformation__WEBPACK_IMPORTED_MODULE_19__ + ]; + const mappersJson = [].concat(...ops.map(op => op.json)); + this.opMappers = mappersJson.reduce((map, mapper) => { + map[mapper.tfOpName] = mapper; + return map; + }, {}); + } + // Converts the model from Tensorflow GraphDef to local representation for + // TensorFlow.js API + transformGraph(graph, signature = {}) { + const tfNodes = graph.node; + const placeholders = []; + const weights = []; + const nodes = tfNodes.reduce((map, node) => { + map[node.name] = this.mapNode(node); + if (node.op.startsWith('Placeholder')) { + placeholders.push(map[node.name]); + } + if (node.op === 'Const') { + weights.push(map[node.name]); + } + return map; + }, {}); + let inputs = []; + const outputs = []; + let inputNodeNameToKey = {}; + let outputNodeNameToKey = {}; + if (signature != null) { + inputNodeNameToKey = this.mapSignatureEntries(signature.inputs); + outputNodeNameToKey = this.mapSignatureEntries(signature.outputs); + } + const allNodes = Object.keys(nodes); + allNodes.forEach(key => { + const node = nodes[key]; + node.inputNames.forEach(name => { + const [nodeName,] = Object(_executors_utils__WEBPACK_IMPORTED_MODULE_3__[/* getNodeNameAndIndex */ 
"a"])(name); + node.inputs.push(nodes[nodeName]); + nodes[nodeName].children.push(node); + }); + }); + // if signature has not outputs set, add any node that does not have + // outputs. + if (Object.keys(outputNodeNameToKey).length === 0) { + allNodes.forEach(key => { + const node = nodes[key]; + if (node.children.length === 0) { + outputs.push(node); + } + }); + } + else { + Object.keys(outputNodeNameToKey).forEach(name => { + const [nodeName,] = Object(_executors_utils__WEBPACK_IMPORTED_MODULE_3__[/* getNodeNameAndIndex */ "a"])(name); + const node = nodes[nodeName]; + if (node != null) { + node.signatureKey = outputNodeNameToKey[name]; + outputs.push(node); + } + }); + } + if (Object.keys(inputNodeNameToKey).length > 0) { + Object.keys(inputNodeNameToKey).forEach(name => { + const [nodeName,] = Object(_executors_utils__WEBPACK_IMPORTED_MODULE_3__[/* getNodeNameAndIndex */ "a"])(name); + const node = nodes[nodeName]; + if (node) { + node.signatureKey = inputNodeNameToKey[name]; + inputs.push(node); + } + }); + } + else { + inputs = placeholders; + } + let functions = {}; + if (graph.library != null && graph.library.function != null) { + functions = graph.library.function.reduce((functions, func) => { + functions[func.signature.name] = this.mapFunction(func); + return functions; + }, {}); + } + return { + nodes, + inputs, + outputs, + weights, + placeholders, + signature, + functions + }; + } + mapSignatureEntries(entries) { + return Object.keys(entries || {}) + .reduce((prev, curr) => { + prev[entries[curr].name] = curr; + return prev; + }, {}); + } + mapNode(node) { + // Unsupported ops will cause an error at run-time (not parse time), since + // they may not be used by the actual execution subgraph. 
+ const mapper = Object(_custom_op_register__WEBPACK_IMPORTED_MODULE_2__[/* getRegisteredOp */ "b"])(node.op) || this.opMappers[node.op] || {}; + if (node.attr == null) { + node.attr = {}; + } + const newNode = { + name: node.name, + op: node.op, + category: mapper.category, + inputNames: (node.input || + []).map(input => input.startsWith('^') ? input.substr(1) : input), + inputs: [], + children: [], + inputParams: {}, + attrParams: {}, + rawAttrs: node.attr + }; + if (mapper.inputs != null) { + newNode.inputParams = + mapper.inputs.reduce((map, param) => { + map[param.name] = { + type: param.type, + inputIndexStart: param.start, + inputIndexEnd: param.end + }; + return map; + }, {}); + } + if (mapper.attrs != null) { + newNode.attrParams = + mapper.attrs.reduce((map, param) => { + const type = param.type; + let value = undefined; + switch (param.type) { + case 'string': + value = getStringParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getStringParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'string[]': + value = getStringArrayParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getStringArrayParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'number': + value = getNumberParam(node.attr, param.tfName, (param.defaultValue || 0)); + if (value === undefined && !!param.tfDeprecatedName) { + value = getNumberParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'number[]': + value = getNumericArrayParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getNumericArrayParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'bool': + value = getBoolParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + 
value = getBoolParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'bool[]': + value = getBoolArrayParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getBoolArrayParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'shape': + value = getTensorShapeParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getTensorShapeParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'shape[]': + value = getTensorShapeArrayParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getTensorShapeArrayParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'dtype': + value = getDtypeParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getDtypeParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'dtype[]': + value = getDtypeArrayParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getDtypeArrayParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'func': + value = getFuncParam(node.attr, param.tfName, param.defaultValue); + if (value === undefined && !!param.tfDeprecatedName) { + value = getFuncParam(node.attr, param.tfDeprecatedName, param.defaultValue); + } + break; + case 'tensor': + case 'tensors': + break; + default: + throw new Error(`Unsupported param type: ${param.type} for op: ${node.op}`); + } + map[param.name] = { value, type }; + return map; + }, {}); + } + return newNode; + } + // map the TFunctionDef to TFJS graph object + mapFunction(functionDef) { + const tfNodes = functionDef.nodeDef; + const placeholders = []; + const weights = []; + let nodes = {}; + if (tfNodes != null) { + nodes 
= tfNodes.reduce((map, node) => { + map[node.name] = this.mapNode(node); + if (node.op === 'Const') { + weights.push(map[node.name]); + } + return map; + }, {}); + } + const inputs = []; + const outputs = []; + functionDef.signature.inputArg.forEach(arg => { + const [nodeName,] = Object(_executors_utils__WEBPACK_IMPORTED_MODULE_3__[/* getNodeNameAndIndex */ "a"])(arg.name); + const node = { + name: nodeName, + op: 'Placeholder', + inputs: [], + inputNames: [], + category: 'graph', + inputParams: {}, + attrParams: { dtype: { value: parseDtypeParam(arg.type), type: 'dtype' } }, + children: [] + }; + node.signatureKey = arg.name; + inputs.push(node); + nodes[nodeName] = node; + }); + const allNodes = Object.keys(nodes); + allNodes.forEach(key => { + const node = nodes[key]; + node.inputNames.forEach(name => { + const [nodeName,] = Object(_executors_utils__WEBPACK_IMPORTED_MODULE_3__[/* getNodeNameAndIndex */ "a"])(name); + node.inputs.push(nodes[nodeName]); + nodes[nodeName].children.push(node); + }); + }); + const returnNodeMap = functionDef.ret; + functionDef.signature.outputArg.forEach(output => { + const [nodeName, index] = Object(_executors_utils__WEBPACK_IMPORTED_MODULE_3__[/* getNodeNameAndIndex */ "a"])(returnNodeMap[output.name]); + const node = nodes[nodeName]; + if (node != null) { + node.defaultOutput = index; + outputs.push(node); + } + }); + const signature = this.mapArgsToSignature(functionDef); + return { nodes, inputs, outputs, weights, placeholders, signature }; + } + mapArgsToSignature(functionDef) { + return { + methodName: functionDef.signature.name, + inputs: functionDef.signature.inputArg.reduce((map, arg) => { + map[arg.name] = this.mapArgToTensorInfo(arg); + return map; + }, {}), + outputs: functionDef.signature.outputArg.reduce((map, arg) => { + map[arg.name] = this.mapArgToTensorInfo(arg, functionDef.ret); + return map; + }, {}), + }; + } + mapArgToTensorInfo(arg, nameMap) { + let name = arg.name; + if (nameMap != null) { + name = 
nameMap[name]; + } + return { name, dtype: arg.type }; + } +} +function decodeBase64(text) { + const global = Object(_tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["env"])().global; + if (typeof global.atob !== 'undefined') { + return global.atob(text); + } + else if (typeof Buffer !== 'undefined') { + return new Buffer(text, 'base64').toString(); + } + else { + throw new Error('Unable to decode base64 in this environment. ' + + 'Missing built-in atob() or Buffer()'); + } +} +function parseStringParam(s, keepCase) { + const value = Array.isArray(s) ? String.fromCharCode.apply(null, s) : decodeBase64(s); + return keepCase ? value : value.toLowerCase(); +} +function getStringParam(attrs, name, def, keepCase = false) { + const param = attrs[name]; + if (param != null) { + return parseStringParam(param.s, keepCase); + } + return def; +} +function getBoolParam(attrs, name, def) { + const param = attrs[name]; + return param ? param.b : def; +} +function getNumberParam(attrs, name, def) { + const param = attrs[name] || {}; + const value = param['i'] != null ? param['i'] : (param['f'] != null ? param['f'] : def); + return (typeof value === 'number') ? 
value : parseInt(value, 10); +} +function parseDtypeParam(value) { + if (typeof (value) === 'string') { + // tslint:disable-next-line:no-any + value = _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"][value]; + } + switch (value) { + case _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"].DT_FLOAT: + return 'float32'; + case _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"].DT_INT32: + case _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"].DT_INT64: + case _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"].DT_INT8: + case _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"].DT_UINT8: + return 'int32'; + case _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"].DT_BOOL: + return 'bool'; + case _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"].DT_DOUBLE: + return 'float32'; + case _data_compiled_api__WEBPACK_IMPORTED_MODULE_1__[/* DataType */ "a"].DT_STRING: + return 'string'; + default: + // Unknown dtype error will happen at runtime (instead of parse time), + // since these nodes might not be used by the actual subgraph execution. + return null; + } +} +function getFuncParam(attrs, name, def) { + const param = attrs[name]; + if (param && param.func) { + return param.func.name; + } + return def; +} +function getDtypeParam(attrs, name, def) { + const param = attrs[name]; + if (param && param.type) { + return parseDtypeParam(param.type); + } + return def; +} +function getDtypeArrayParam(attrs, name, def) { + const param = attrs[name]; + if (param && param.list && param.list.type) { + return param.list.type.map(v => parseDtypeParam(v)); + } + return def; +} +function parseTensorShapeParam(shape) { + if (shape.unknownRank) { + return undefined; + } + if (shape.dim != null) { + return shape.dim.map(dim => (typeof dim.size === 'number') ? 
dim.size : parseInt(dim.size, 10)); + } + return []; +} +function getTensorShapeParam(attrs, name, def) { + const param = attrs[name]; + if (param && param.shape) { + return parseTensorShapeParam(param.shape); + } + return def; +} +function getNumericArrayParam(attrs, name, def) { + const param = attrs[name]; + if (param) { + return ((param.list.f && param.list.f.length ? param.list.f : + param.list.i) || + []) + .map(v => (typeof v === 'number') ? v : parseInt(v, 10)); + } + return def; +} +function getStringArrayParam(attrs, name, def, keepCase = false) { + const param = attrs[name]; + if (param && param.list && param.list.s) { + return param.list.s.map((v) => { + return parseStringParam(v, keepCase); + }); + } + return def; +} +function getTensorShapeArrayParam(attrs, name, def) { + const param = attrs[name]; + if (param && param.list && param.list.shape) { + return param.list.shape.map((v) => { + return parseTensorShapeParam(v); + }); + } + return def; +} +function getBoolArrayParam(attrs, name, def) { + const param = attrs[name]; + if (param && param.list && param.list.b) { + return param.list.b; + } + return def; +} +//# sourceMappingURL=operation_mapper.js.map +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(39).Buffer)) + +/***/ }), +/* 16 */ +/***/ (function(module, exports) { + +module.exports = function() { + throw new Error("define cannot be used indirect"); +}; + + +/***/ }), +/* 17 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "b", function() { return getKernel; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return getGradient; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "c", function() { return getKernelsForBackend; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "e", function() { return registerKernel; }); 
+/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "d", function() { return registerGradient; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "g", function() { return unregisterKernel; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "f", function() { return unregisterGradient; }); +/* harmony import */ var _global_util__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(33); +/** + * @license + * Copyright 2019 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + +const kernelRegistry = Object(_global_util__WEBPACK_IMPORTED_MODULE_0__[/* getGlobal */ "a"])('kernelRegistry', () => new Map()); +const gradRegistry = Object(_global_util__WEBPACK_IMPORTED_MODULE_0__[/* getGlobal */ "a"])('gradRegistry', () => new Map()); +/** + * Returns the kernel function (code) associated with the provided names. + * + * @param kernelName The official name of the kernel. + * @param backendName The official name of the backend. + */ +function getKernel(kernelName, backendName) { + const key = makeKey(kernelName, backendName); + return kernelRegistry.get(key); +} +/** + * Returns the registered gradient info associated with the provided kernel. + * @param kernelName The official TF kernel name. 
+ */ +function getGradient(kernelName) { + return gradRegistry.get(kernelName); +} +function getKernelsForBackend(backendName) { + const it = kernelRegistry.entries(); + const result = []; + while (true) { + const { done, value } = it.next(); + if (done) { + break; + } + const [key, config] = value; + const [backend,] = key.split('_'); + if (backend === backendName) { + result.push(config); + } + } + return result; +} +/** + * Registers the function (forward pass) for the kernel in a global registry. + * + * @param config A config object with the following properties: + * - `kernelName` The official name of the kernel. + * - `backendName` The official name of the backend. + * - `kernelFunc` The function to run during the forward pass of the kernel. + * - `setupFunc` Optional. Gets called once, after the backend initializes. + * - `disposeFunc` Optional. Gets called once, right before the backend is + * disposed. + */ +function registerKernel(config) { + const { kernelName, backendName } = config; + const key = makeKey(kernelName, backendName); + if (kernelRegistry.has(key)) { + console.warn(`The kernel '${kernelName}' for backend ` + + `'${backendName}' is already registered`); + } + kernelRegistry.set(key, config); +} +/** + * Registers a gradient function for a given kernel in the global registry, + * to be used during the back-propagation of that kernel. + * + * @param config An object with the following properties: + * - `kernelName` The name of the kernel that the gradient function is for. + * - `gradFunc` The function to run during back-propagation. + */ +function registerGradient(config) { + const { kernelName } = config; + if (gradRegistry.has(kernelName)) { + console.warn(`Overriding the gradient for '${kernelName}'`); + } + gradRegistry.set(kernelName, config); +} +/** + * Removes the kernel function from the registry. + * + * @param kernelName The official name of the kernel. + * @param backendName The official name of the backend. 
+ * + */ +function unregisterKernel(kernelName, backendName) { + const key = makeKey(kernelName, backendName); + if (!kernelRegistry.has(key)) { + throw new Error(`The kernel '${kernelName}' for backend ` + + `'${backendName}' is not registered`); + } + kernelRegistry.delete(key); +} +/** Removes the registered gradient from the global registry. */ +function unregisterGradient(kernelName) { + if (!gradRegistry.has(kernelName)) { + throw new Error(`The gradient '${kernelName}' for backend is not registered`); + } + gradRegistry.delete(kernelName); +} +function makeKey(kernelName, backendName) { + return `${backendName}_${kernelName}`; +} +//# sourceMappingURL=kernel_registry.js.map + +/***/ }), +/* 18 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return complex; }); +/* harmony import */ var _engine__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(5); +/* harmony import */ var _kernel_names__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(6); +/* harmony import */ var _tensor_util_env__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(3); +/* harmony import */ var _util__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(1); +/* harmony import */ var _operation__WEBPACK_IMPORTED_MODULE_4__ = __webpack_require__(4); +/** + * @license + * Copyright 2020 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + + + + +/** + * Converts two real numbers to a complex number. + * + * Given a tensor `real` representing the real part of a complex number, and a + * tensor `imag` representing the imaginary part of a complex number, this + * operation returns complex numbers elementwise of the form [r0, i0, r1, i1], + * where r represents the real part and i represents the imag part. + * + * The input tensors real and imag must have the same shape. + * + * ```js + * const real = tf.tensor1d([2.25, 3.25]); + * const imag = tf.tensor1d([4.75, 5.75]); + * const complex = tf.complex(real, imag); + * + * complex.print(); + * ``` + */ +/** @doc {heading: 'Tensors', subheading: 'Creation'} */ +function complex_(real, imag) { + const $real = Object(_tensor_util_env__WEBPACK_IMPORTED_MODULE_2__[/* convertToTensor */ "a"])(real, 'real', 'complex'); + const $imag = Object(_tensor_util_env__WEBPACK_IMPORTED_MODULE_2__[/* convertToTensor */ "a"])(imag, 'imag', 'complex'); + _util__WEBPACK_IMPORTED_MODULE_3__["assertShapesMatch"]($real.shape, $imag.shape, `real and imag shapes, ${$real.shape} and ${$imag.shape}, ` + + `must match in call to tf.complex().`); + const forward = (backend) => { + return backend.complex($real, $imag); + }; + const inputs = { real: $real, imag: $imag }; + return _engine__WEBPACK_IMPORTED_MODULE_0__[/* ENGINE */ "a"].runKernelFunc(forward, inputs, null /* gradient */, _kernel_names__WEBPACK_IMPORTED_MODULE_1__[/* Complex */ "k"]); +} +const complex = Object(_operation__WEBPACK_IMPORTED_MODULE_4__[/* op */ "a"])({ complex_ }); +//# sourceMappingURL=complex.js.map + +/***/ }), +/* 19 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "b", function() { return 
deepMap; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "d", function() { return deepZip; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "f", function() { return zipToList; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "c", function() { return deepMapAndAwaitAll; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "e", function() { return isIterable; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return canTensorify; }); +/* harmony import */ var _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(0); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + * ============================================================================= + */ + +/** + * Apply a mapping function to a nested structure in a recursive manner. + * + * The result of the mapping is an object with the same nested structure (i.e., + * of arrays and dicts) as the input, except that some subtrees are replaced, + * according to the results of the mapping function. + * + * Mappings are memoized. Thus, if the nested structure contains the same + * object in multiple positions, the output will contain the same mapped object + * in those positions. Cycles are not supported, however. 
+ * + * @param input: The object to which to apply the mapping function. + * @param mapFn: A function that expects a single node of the object tree, and + * returns a `DeepMapResult`. The `DeepMapResult` either provides a + * replacement value for that node (i.e., replacing the subtree), or indicates + * that the node should be processed recursively. + */ +function deepMap(input, mapFn) { + return deepMapInternal(input, mapFn); +} +/** + * @param seen: A Map of known object mappings (i.e., memoized results of + * `mapFn()`) + * @param containedIn: An set containing objects on the reference path currently + * being processed (used to detect cycles). + */ +function deepMapInternal(input, mapFn, seen = new Map(), containedIn = new Set()) { + if (input == null) { + return null; + } + if (containedIn.has(input)) { + throw new Error('Circular references are not supported.'); + } + if (seen.has(input)) { + return seen.get(input); + } + const result = mapFn(input); + if (result.recurse && result.value !== null) { + throw new Error('A deep map function may not return both a value and recurse=true.'); + } + if (!result.recurse) { + seen.set(input, result.value); + return result.value; + } + else if (isIterable(input)) { + // tslint:disable-next-line:no-any + const mappedIterable = Array.isArray(input) ? [] : {}; + containedIn.add(input); + for (const k in input) { + const child = input[k]; + const childResult = deepMapInternal(child, mapFn, seen, containedIn); + mappedIterable[k] = childResult; + } + containedIn.delete(input); + return mappedIterable; + } + else { + throw new Error(`Can't recurse into non-iterable type: ${input}`); + } +} +// TODO(soergel, kangyizhang) Reconsider naming of deepZip() to avoid confusion +// with zip() +/** + * Zip nested structures together in a recursive manner. + * + * This has the effect of transposing or pivoting data, e.g. converting it from + * a row-major representation to a column-major representation. 
+ * + * For example, `deepZip([{a: 1, b: 2}, {a: 3, b: 4}])` returns + * `{a: [1, 3], b: [2, 4]}`. + * + * The inputs should all have the same nested structure (i.e., of arrays and + * dicts). The result is a single object with the same nested structure, where + * the leaves are arrays collecting the values of the inputs at that location + * (or, optionally, the result of a custom function applied to those arrays). + * + * @param inputs: An array of the objects to zip together. + * @param zipFn: (optional) A function that expects an array of elements at a + * single node of the object tree, and returns a `DeepMapResult`. The + * `DeepMapResult` either provides a result value for that node (i.e., + * representing the subtree), or indicates that the node should be processed + * recursively. The default zipFn recurses as far as possible and places + * arrays at the leaves. + */ +function deepZip(inputs, zipFn = zipToList) { + return deepZipInternal(inputs, zipFn); +} +/** + * @param containedIn: A set containing objects on the reference path currently + * being processed (used to detect cycles). + */ +function deepZipInternal(inputs, zipFn, containedIn = new Set()) { + // The recursion follows the structure of input 0; it's assumed that all the + // other inputs have the same structure. + const input = inputs[0]; + if (containedIn.has(input)) { + throw new Error('Circular references are not supported.'); + } + const result = zipFn(inputs); + if (result.recurse && result.value !== null) { + throw new Error('A deep zip function may not return both a value and recurse=true.'); + } + if (!result.recurse) { + return result.value; + } + else if (isIterable(input)) { + // tslint:disable-next-line:no-any + const mappedIterable = Array.isArray(input) ? 
[] : {}; + containedIn.add(input); + for (const k in input) { + const children = inputs.map(x => x[k]); + const childResult = deepZipInternal(children, zipFn, containedIn); + mappedIterable[k] = childResult; + } + containedIn.delete(input); + return mappedIterable; + } + else { + throw new Error(`Can't recurse into non-iterable type: ${input}`); + } +} +// tslint:disable-next-line:no-any +function zipToList(x) { + if (x === null) { + return null; + } + // TODO(soergel): validate array type? + if (isIterable(x[0])) { + return { value: null, recurse: true }; + } + else { + return { value: x, recurse: false }; + } +} +/** + * Apply an async mapping function to a nested structure in a recursive manner. + * + * This first creates a nested structure of Promises, and then awaits all of + * those, resulting in a single Promise for a resolved nested structure. + * + * The result of the mapping is an object with the same nested structure (i.e., + * of arrays and dicts) as the input, except that some subtrees are replaced, + * according to the results of the mapping function. + * + * Mappings are memoized. Thus, if the nested structure contains the same + * object in multiple positions, the output will contain the same mapped object + * in those positions. Cycles are not supported, however. + * + * @param input: The object to which to apply the mapping function. + * @param mapFn: A function that expects a single node of the object tree, and + * returns a `DeepMapAsyncResult`. The `DeepMapAsyncResult` either provides + * a `Promise` for a replacement value for that node (i.e., replacing the + * subtree), or indicates that the node should be processed recursively. Note + * that the decision whether or not to recurse must be made immediately; only + * the mapped value may be promised. + */ +async function deepMapAndAwaitAll(input, mapFn) { + const seen = new Map(); + // First do a normal deepMap, collecting Promises in 'seen' as a side effect. 
+ deepMapInternal(input, mapFn, seen); + // Replace the Promises in 'seen' in place. + // Note TypeScript provides no async map iteration, and regular map iteration + // is broken too, so sadly we have to do Array.from() to make it work. + // (There's no advantage to Promise.all(), and that would be tricky anyway.) + for (const key of Array.from(seen.keys())) { + const value = seen.get(key); + if (value instanceof Promise) { + const mappedValue = await value; + seen.set(key, mappedValue); + } + } + // Normal deepMap again, this time filling in the resolved values. + // It's unfortunate that we have to do two passes. + // TODO(soergel): test performance and think harder about a fast solution. + const result = deepMapInternal(input, mapFn, seen); + return result; +} +/** + * Determine whether the argument is iterable. + * + * @returns true if the argument is an array or any non-Tensor object. + */ +// tslint:disable-next-line:no-any +function isIterable(obj) { + return obj != null && (!ArrayBuffer.isView(obj)) && + (Array.isArray(obj) || + (typeof obj === 'object' && !(obj instanceof _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["Tensor"]))); +} +/** + * Determine whether the argument can be converted to Tensor. + * + * Tensors, primitives, arrays, and TypedArrays all qualify; anything else does + * not. + * + * @returns true if the argument can be converted to Tensor. + */ +// tslint:disable-next-line:no-any +function canTensorify(obj) { + return obj == null || isPrimitive(obj) || Array.isArray(obj) || + (typeof obj === 'object' && (obj instanceof _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["Tensor"])) || + _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].isTypedArray(obj); +} +/** + * Returns true if the given `value` is a primitive type. Otherwise returns + * false. 
This is equivalent to node util.isPrimitive + */ +function isPrimitive(value) { + return (value === null || + (typeof value !== 'object' && typeof value !== 'function')); +} +//# sourceMappingURL=deep_map.js.map + +/***/ }), +/* 20 */ +/***/ (function(module, exports, __webpack_require__) { + +// A library of seedable RNGs implemented in JavaScript. +// +// Usage: +// +// var seedrandom = require('seedrandom'); +// var random = seedrandom(1); // or any seed. +// var x = random(); // 0 <= x < 1. Every bit is random. +// var x = random.quick(); // 0 <= x < 1. 32 bits of randomness. + +// alea, a 53-bit multiply-with-carry generator by Johannes Baagøe. +// Period: ~2^116 +// Reported to pass all BigCrush tests. +var alea = __webpack_require__(68); + +// xor128, a pure xor-shift generator by George Marsaglia. +// Period: 2^128-1. +// Reported to fail: MatrixRank and LinearComp. +var xor128 = __webpack_require__(69); + +// xorwow, George Marsaglia's 160-bit xor-shift combined plus weyl. +// Period: 2^192-2^32 +// Reported to fail: CollisionOver, SimpPoker, and LinearComp. +var xorwow = __webpack_require__(70); + +// xorshift7, by François Panneton and Pierre L'ecuyer, takes +// a different approach: it adds robustness by allowing more shifts +// than Marsaglia's original three. It is a 7-shift generator +// with 256 bits, that passes BigCrush with no systematic failures. +// Period 2^256-1. +// No systematic BigCrush failures reported. +var xorshift7 = __webpack_require__(71); + +// xor4096, by Richard Brent, is a 4096-bit xor-shift with a +// very long period that also adds a Weyl generator. It also passes +// BigCrush with no systematic failures. Its long period may +// be useful if you have many generators and need to avoid +// collisions. +// Period: 2^4128-2^32. +// No systematic BigCrush failures reported. 
+var xor4096 = __webpack_require__(72); + +// Tyche-i, by Samuel Neves and Filipe Araujo, is a bit-shifting random +// number generator derived from ChaCha, a modern stream cipher. +// https://eden.dei.uc.pt/~sneves/pubs/2011-snfa2.pdf +// Period: ~2^127 +// No systematic BigCrush failures reported. +var tychei = __webpack_require__(73); + +// The original ARC4-based prng included in this library. +// Period: ~2^1600 +var sr = __webpack_require__(74); + +sr.alea = alea; +sr.xor128 = xor128; +sr.xorwow = xorwow; +sr.xorshift7 = xorshift7; +sr.xor4096 = xor4096; +sr.tychei = tychei; + +module.exports = sr; + + +/***/ }), +/* 21 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return DataType; }); +/* unused harmony export SaverDef */ +/** + * @license + * Copyright 2019 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + * ============================================================================= + */ +/** DataType enum. 
*/ +var DataType; +(function (DataType) { + DataType[DataType["DT_INVALID"] = 0] = "DT_INVALID"; + DataType[DataType["DT_FLOAT"] = 1] = "DT_FLOAT"; + DataType[DataType["DT_DOUBLE"] = 2] = "DT_DOUBLE"; + DataType[DataType["DT_INT32"] = 3] = "DT_INT32"; + DataType[DataType["DT_UINT8"] = 4] = "DT_UINT8"; + DataType[DataType["DT_INT16"] = 5] = "DT_INT16"; + DataType[DataType["DT_INT8"] = 6] = "DT_INT8"; + DataType[DataType["DT_STRING"] = 7] = "DT_STRING"; + DataType[DataType["DT_COMPLEX64"] = 8] = "DT_COMPLEX64"; + DataType[DataType["DT_INT64"] = 9] = "DT_INT64"; + DataType[DataType["DT_BOOL"] = 10] = "DT_BOOL"; + DataType[DataType["DT_QINT8"] = 11] = "DT_QINT8"; + DataType[DataType["DT_QUINT8"] = 12] = "DT_QUINT8"; + DataType[DataType["DT_QINT32"] = 13] = "DT_QINT32"; + DataType[DataType["DT_BFLOAT16"] = 14] = "DT_BFLOAT16"; + DataType[DataType["DT_FLOAT_REF"] = 101] = "DT_FLOAT_REF"; + DataType[DataType["DT_DOUBLE_REF"] = 102] = "DT_DOUBLE_REF"; + DataType[DataType["DT_INT32_REF"] = 103] = "DT_INT32_REF"; + DataType[DataType["DT_UINT8_REF"] = 104] = "DT_UINT8_REF"; + DataType[DataType["DT_INT16_REF"] = 105] = "DT_INT16_REF"; + DataType[DataType["DT_INT8_REF"] = 106] = "DT_INT8_REF"; + DataType[DataType["DT_STRING_REF"] = 107] = "DT_STRING_REF"; + DataType[DataType["DT_COMPLEX64_REF"] = 108] = "DT_COMPLEX64_REF"; + DataType[DataType["DT_INT64_REF"] = 109] = "DT_INT64_REF"; + DataType[DataType["DT_BOOL_REF"] = 110] = "DT_BOOL_REF"; + DataType[DataType["DT_QINT8_REF"] = 111] = "DT_QINT8_REF"; + DataType[DataType["DT_QUINT8_REF"] = 112] = "DT_QUINT8_REF"; + DataType[DataType["DT_QINT32_REF"] = 113] = "DT_QINT32_REF"; + DataType[DataType["DT_BFLOAT16_REF"] = 114] = "DT_BFLOAT16_REF"; +})(DataType || (DataType = {})); +var SaverDef; +(function (SaverDef) { + /** CheckpointFormatVersion enum. 
*/ + let CheckpointFormatVersion; + (function (CheckpointFormatVersion) { + CheckpointFormatVersion[CheckpointFormatVersion["LEGACY"] = 0] = "LEGACY"; + CheckpointFormatVersion[CheckpointFormatVersion["V1"] = 1] = "V1"; + CheckpointFormatVersion[CheckpointFormatVersion["V2"] = 2] = "V2"; + })(CheckpointFormatVersion = SaverDef.CheckpointFormatVersion || (SaverDef.CheckpointFormatVersion = {})); +})(SaverDef || (SaverDef = {})); +//# sourceMappingURL=compiled_api.js.map + +/***/ }), +/* 22 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return real; }); +/* harmony import */ var _engine__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(5); +/* harmony import */ var _kernel_names__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(6); +/* harmony import */ var _tensor_util_env__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(3); +/* harmony import */ var _operation__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(4); +/** + * @license + * Copyright 2020 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + + + +/** + * Returns the real part of a complex (or real) tensor. 
+ * + * Given a tensor input, this operation returns a tensor of type float that is + * the real part of each element in input considered as a complex number. + * + * If the input is real, it simply makes a clone. + * + * ```js + * const x = tf.complex([-2.25, 3.25], [4.75, 5.75]); + * tf.real(x).print(); + * ``` + */ +/** @doc {heading: 'Tensors', subheading: 'Creation'} */ +function real_(input) { + const $input = Object(_tensor_util_env__WEBPACK_IMPORTED_MODULE_2__[/* convertToTensor */ "a"])(input, 'input', 'real'); + const forward = (backend) => { + return backend.real($input); + }; + const inputs = { input: $input }; + return _engine__WEBPACK_IMPORTED_MODULE_0__[/* ENGINE */ "a"].runKernelFunc(forward, inputs, null /* gradient */, _kernel_names__WEBPACK_IMPORTED_MODULE_1__[/* Real */ "hb"]); +} +const real = Object(_operation__WEBPACK_IMPORTED_MODULE_3__[/* op */ "a"])({ real_ }); +//# sourceMappingURL=real.js.map + +/***/ }), +/* 23 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return Rank; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "c", function() { return upcastType; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "b", function() { return sumOutType; }); +/** + * @license + * Copyright 2017 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +var Rank; +(function (Rank) { + Rank["R0"] = "R0"; + Rank["R1"] = "R1"; + Rank["R2"] = "R2"; + Rank["R3"] = "R3"; + Rank["R4"] = "R4"; + Rank["R5"] = "R5"; + Rank["R6"] = "R6"; +})(Rank || (Rank = {})); +// Looks for upcasting types. Used, for example, in operations with mixed dtype +// inputs. +var UpcastInt32AndMap; +(function (UpcastInt32AndMap) { + UpcastInt32AndMap["float32"] = "float32"; + UpcastInt32AndMap["int32"] = "int32"; + UpcastInt32AndMap["bool"] = "int32"; + UpcastInt32AndMap["complex64"] = "complex64"; +})(UpcastInt32AndMap || (UpcastInt32AndMap = {})); +var UpcastBoolAndMap; +(function (UpcastBoolAndMap) { + UpcastBoolAndMap["float32"] = "float32"; + UpcastBoolAndMap["int32"] = "int32"; + UpcastBoolAndMap["bool"] = "bool"; + UpcastBoolAndMap["complex64"] = "complex64"; +})(UpcastBoolAndMap || (UpcastBoolAndMap = {})); +var UpcastFloat32AndMap; +(function (UpcastFloat32AndMap) { + UpcastFloat32AndMap["float32"] = "float32"; + UpcastFloat32AndMap["int32"] = "float32"; + UpcastFloat32AndMap["bool"] = "float32"; + UpcastFloat32AndMap["complex64"] = "complex64"; +})(UpcastFloat32AndMap || (UpcastFloat32AndMap = {})); +var UpcastComplex64AndMap; +(function (UpcastComplex64AndMap) { + UpcastComplex64AndMap["float32"] = "complex64"; + UpcastComplex64AndMap["int32"] = "complex64"; + UpcastComplex64AndMap["bool"] = "complex64"; + UpcastComplex64AndMap["complex64"] = "complex64"; +})(UpcastComplex64AndMap || (UpcastComplex64AndMap = {})); +const upcastTypeMap = { + 'float32': UpcastFloat32AndMap, + 'int32': UpcastInt32AndMap, + 'bool': UpcastBoolAndMap, + 'complex64': UpcastComplex64AndMap +}; +function upcastType(typeA, typeB) { + if (typeA === 'string' || typeB === 'string') { + if (typeA === 'string' && typeB === 'string') { + return 'string'; + } + throw new 
Error(`Can not upcast ${typeA} with ${typeB}`); + } + return upcastTypeMap[typeA][typeB]; +} +/** Returns the output type after summation. */ +function sumOutType(type) { + return upcastType(type, 'int32'); +} +//# sourceMappingURL=types.js.map + +/***/ }), +/* 24 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "c", function() { return registerOp; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "b", function() { return getRegisteredOp; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return deregisterOp; }); +/** + * @license + * Copyright 2019 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const CUSTOM_OPS = {}; +/** + * Register an Op for graph model executor. This allows you to register a + * TensorFlow custom op or override an existing op. + * + * Here is an example of registering a new MatMul Op. + * ```js + * const customMatmul = (node) => + * tf.matMul( + * node.inputs[0], node.inputs[1], + * node.attrs['transpose_a'], node.attrs['transpose_b']); + * + * tf.registerOp('MatMul', customMatmul); + * ``` + * The inputs and attrs of the node object are based on the TensorFlow op + * registry. + * + * @param name The Tensorflow Op name. 
+ * @param opFunc An op function which is called with the current graph node + * during execution and needs to return a tensor or a list of tensors. The node + * has the following attributes: + * - attr: A map from attribute name to its value + * - inputs: A list of input tensors + */ +/** @doc {heading: 'Models', subheading: 'Op Registry'} */ +function registerOp(name, opFunc) { + const opMapper = { + tfOpName: name, + category: 'custom', + inputs: [], + attrs: [], + customExecutor: opFunc + }; + CUSTOM_OPS[name] = opMapper; +} +/** + * Retrieve the OpMapper object for the registered op. + * + * @param name The Tensorflow Op name. + */ +/** @doc {heading: 'Models', subheading: 'Op Registry'} */ +function getRegisteredOp(name) { + return CUSTOM_OPS[name]; +} +/** + * Deregister the Op for graph model executor. + * + * @param name The Tensorflow Op name. + */ +/** @doc {heading: 'Models', subheading: 'Op Registry'} */ +function deregisterOp(name) { + delete CUSTOM_OPS[name]; +} +//# sourceMappingURL=register.js.map + +/***/ }), +/* 25 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return imag; }); +/* harmony import */ var _engine__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(5); +/* harmony import */ var _kernel_names__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(6); +/* harmony import */ var _tensor_util_env__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(3); +/* harmony import */ var _operation__WEBPACK_IMPORTED_MODULE_3__ = __webpack_require__(4); +/** + * @license + * Copyright 2020 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + + + +/** + * Returns the imaginary part of a complex (or real) tensor. + * + * Given a tensor input, this operation returns a tensor of type float that is + * the imaginary part of each element in input considered as a complex number. + * If input is real, a tensor of all zeros is returned. + * + * ```js + * const x = tf.complex([-2.25, 3.25], [4.75, 5.75]); + * tf.imag(x).print(); + * ``` + */ +/** @doc {heading: 'Tensors', subheading: 'Creation'} */ +function imag_(input) { + const $input = Object(_tensor_util_env__WEBPACK_IMPORTED_MODULE_2__[/* convertToTensor */ "a"])(input, 'input', 'imag'); + const forward = (backend) => { + return backend.imag($input); + }; + const inputs = { input: $input }; + return _engine__WEBPACK_IMPORTED_MODULE_0__[/* ENGINE */ "a"].runKernelFunc(forward, inputs, null /* gradient */, _kernel_names__WEBPACK_IMPORTED_MODULE_1__[/* Imag */ "K"]); +} +const imag = Object(_operation__WEBPACK_IMPORTED_MODULE_3__[/* op */ "a"])({ imag_ }); +//# sourceMappingURL=imag.js.map + +/***/ }), +/* 26 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "b", function() { return pool; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return maxPoolPositions; }); +/* harmony import */ var _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(0); +/** + * @license + * 
Copyright 2020 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + +function pool(xValues, xShape, dtype, strides, convInfo, poolType) { + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const effectiveFilterHeight = convInfo.effectiveFilterHeight; + const effectiveFilterWidth = convInfo.effectiveFilterWidth; + const padTop = convInfo.padInfo.top; + const padLeft = convInfo.padInfo.left; + const initialValue = (poolType === 'max' ? 
Number.NEGATIVE_INFINITY : + Number.POSITIVE_INFINITY); + const output = Object(_tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["buffer"])(convInfo.outShape, dtype); + const outputVals = output.values; + const outputBatchStrides = convInfo.outShape[1] * convInfo.outShape[2] * convInfo.outShape[3]; + const outputRowStrides = convInfo.outShape[2] * convInfo.outShape[3]; + const outputColStrides = convInfo.outShape[3]; + for (let b = 0; b < convInfo.batchSize; ++b) { + const outputBatchOffset = b * outputBatchStrides; + const inputBatchOffset = b * strides[0]; + for (let d = 0; d < convInfo.inChannels; ++d) { + for (let yR = 0; yR < convInfo.outHeight; ++yR) { + const xRCorner = yR * strideHeight - padTop; + const xRMin = Math.max(0, xRCorner); + const xRMax = Math.min(convInfo.inHeight, effectiveFilterHeight + xRCorner); + const outputRowOffset = outputBatchOffset + yR * outputRowStrides; + for (let yC = 0; yC < convInfo.outWidth; ++yC) { + const xCCorner = yC * strideWidth - padLeft; + const xCMin = Math.max(0, xCCorner); + const xCMax = Math.min(convInfo.inWidth, effectiveFilterWidth + xCCorner); + let minMaxValue = initialValue; + let avgValue = 0; + let count = 0; + for (let xR = xRMin; xR < xRMax; xR += dilationHeight) { + const xROffset = inputBatchOffset + xR * strides[1]; + for (let xC = xCMin; xC < xCMax; xC += dilationWidth) { + const xCOffset = xROffset + xC * strides[2]; + const pixel = xValues[xCOffset + d]; + if ((poolType === 'max' && pixel > minMaxValue)) { + minMaxValue = pixel; + } + else if (poolType === 'avg') { + avgValue += pixel; + count++; + } + } + if (isNaN(minMaxValue)) { + break; + } + } + const outputOffset = outputRowOffset + yC * outputColStrides + d; + outputVals[outputOffset] = + poolType === 'avg' ? 
avgValue / count : minMaxValue; + } + } + } + } + return output; +} +function maxPoolPositions(xValues, xShape, dtype, convInfo, flattenPositions = false, includeBatchInIndex = false) { + const maxPositions = Object(_tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["buffer"])(convInfo.outShape, 'int32'); + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const effectiveFilterHeight = convInfo.effectiveFilterHeight; + const effectiveFilterWidth = convInfo.effectiveFilterWidth; + const padTop = convInfo.padInfo.top; + const padLeft = convInfo.padInfo.left; + const xBuf = Object(_tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["buffer"])(xShape, dtype, xValues); + for (let b = 0; b < convInfo.batchSize; ++b) { + for (let d = 0; d < convInfo.inChannels; ++d) { + for (let yR = 0; yR < convInfo.outHeight; ++yR) { + const xRCorner = yR * strideHeight - padTop; + let xRMin = xRCorner; + while (xRMin < 0) { + xRMin += dilationHeight; + } + // const xRMin = Math.max(0, xRCorner); + const xRMax = Math.min(convInfo.inHeight, effectiveFilterHeight + xRCorner); + for (let yC = 0; yC < convInfo.outWidth; ++yC) { + const xCCorner = yC * strideWidth - padLeft; + let xCMin = xCCorner; + while (xCMin < 0) { + xCMin += dilationWidth; + } + const xCMax = Math.min(convInfo.inWidth, effectiveFilterWidth + xCCorner); + let maxValue = Number.NEGATIVE_INFINITY; + let maxPosition = -1; + for (let xR = xRMin; xR < xRMax; xR += dilationHeight) { + const wR = xR - xRCorner; + for (let xC = xCMin; xC < xCMax; xC += dilationWidth) { + const wC = xC - xCCorner; + const pixel = xBuf.get(b, xR, xC, d); + if (pixel > maxValue) { + maxValue = pixel; + if (flattenPositions) { + maxPosition = includeBatchInIndex ? 
+ ((b * convInfo.inHeight + xR) * convInfo.inWidth + xC) * + convInfo.inChannels + + d : + (xR * convInfo.inWidth + xC) * convInfo.inChannels + d; + } + else { + maxPosition = wR * effectiveFilterWidth + wC; + } + } + } + } + maxPositions.set(maxPosition, b, yR, yC, d); + } + } + } + } + return maxPositions; +} +//# sourceMappingURL=pool_utils.js.map + +/***/ }), +/* 27 */ +/***/ (function(module, exports) { + +var g; + +// This works in non-strict mode +g = (function() { + return this; +})(); + +try { + // This works if eval is allowed (see CSP) + g = g || new Function("return this")(); +} catch (e) { + // This works if the window reference is available + if (typeof window === "object") g = window; +} + +// g can still be undefined, but nothing to do about it... +// We return undefined, instead of nothing here, so it's +// easier to handle this case. if(!global) { ...} + +module.exports = g; + + +/***/ }), +/* 28 */ +/***/ (function(module, exports) { + +module.exports = function(module) { + if (!module.webpackPolyfill) { + module.deprecate = function() {}; + module.paths = []; + // module.parent = undefined by default + if (!module.children) module.children = []; + Object.defineProperty(module, "loaded", { + enumerable: true, + get: function() { + return module.l; + } + }); + Object.defineProperty(module, "id", { + enumerable: true, + get: function() { + return module.i; + } + }); + module.webpackPolyfill = 1; + } + return module; +}; + + +/***/ }), +/* 29 */ +/***/ (function(module, exports) { + +/* WEBPACK VAR INJECTION */(function(__webpack_amd_options__) {/* globals __webpack_amd_options__ */ +module.exports = __webpack_amd_options__; + +/* WEBPACK VAR INJECTION */}.call(this, {})) + +/***/ }), +/* 30 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return transposeImpl; }); +/* harmony import */ var 
_tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(0); +/** + * @license + * Copyright 2020 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + +function transposeImpl(xVals, xShape, dtype, perm, newShape) { + const xRank = xShape.length; + const xSize = _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].sizeFromShape(xShape); + const xStrides = _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].computeStrides(xShape); + const newStrides = _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].computeStrides(newShape); + const result = _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].getTypedArrayFromDType(dtype, _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].sizeFromShape(newShape)); + for (let i = 0; i < xSize; ++i) { + const loc = _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].indexToLoc(i, xRank, xStrides); + // Permute location. 
+ const newLoc = new Array(loc.length); + for (let i = 0; i < newLoc.length; i++) { + newLoc[i] = loc[perm[i]]; + } + const newIndex = _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].locToIndex(newLoc, xRank, newStrides); + result[newIndex] = xVals[i]; + } + return result; +} +//# sourceMappingURL=Transpose_impl.js.map + +/***/ }), +/* 31 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; + +// EXPORTS +__webpack_require__.d(__webpack_exports__, "a", function() { return /* reexport */ backend_cpu_MathBackendCPU; }); +__webpack_require__.d(__webpack_exports__, "c", function() { return /* reexport */ version; }); +__webpack_require__.d(__webpack_exports__, "b", function() { return /* reexport */ shared_namespaceObject; }); + +// NAMESPACE OBJECT: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/shared.js +var shared_namespaceObject = {}; +__webpack_require__.r(shared_namespaceObject); +__webpack_require__.d(shared_namespaceObject, "maxImpl", function() { return Max_impl["a" /* maxImpl */]; }); +__webpack_require__.d(shared_namespaceObject, "transposeImpl", function() { return Transpose_impl["a" /* transposeImpl */]; }); + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/kernels/Max_impl.js +var Max_impl = __webpack_require__(37); + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/kernels/Transpose_impl.js +var Transpose_impl = __webpack_require__(30); + +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/shared.js +/** + * @license + * Copyright 2020 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +// Shared kernel impls for use in other backends. + + +//# sourceMappingURL=shared.js.map +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-core/dist/index.js + 269 modules +var dist = __webpack_require__(0); + +// EXTERNAL MODULE: ./node_modules/seedrandom/index.js +var seedrandom = __webpack_require__(20); + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/cpu_util.js +var cpu_util = __webpack_require__(9); + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/utils/pool_utils.js +var pool_utils = __webpack_require__(26); + +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/backend_cpu.js +/** + * @license + * Copyright 2017 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + + + + +const nonMaxSuppressionV3 = dist["kernel_impls"].nonMaxSuppressionV3; +const split = dist["kernel_impls"].split; +const tile = dist["kernel_impls"].tile; +const topkImpl = dist["kernel_impls"].topkImpl; +const whereImpl = dist["kernel_impls"].whereImpl; + + + +function mapActivation(backend, x, activation, preluActivationWeights) { + if (activation === 'linear') { + return backend.linear(x); + } + else if (activation === 'relu') { + return backend.relu(x); + } + else if (activation === 'elu') { + return backend.elu(x); + } + else if (activation === 'relu6') { + return backend.relu6(x); + } + else if (activation === 'prelu') { + return backend.prelu(x, preluActivationWeights); + } + throw new Error(`Activation ${activation} has not been implemented for the CPU backend.`); +} +class backend_cpu_MathBackendCPU extends dist["KernelBackend"] { + constructor() { + super(); + this.blockSize = 48; + this.firstUse = true; + this.data = new dist["DataStorage"](this, Object(dist["engine"])()); + } + write(values, shape, dtype) { + if (this.firstUse) { + this.firstUse = false; + if (Object(dist["env"])().get('IS_NODE')) { + dist["backend_util"].warn('\n============================\n' + + 'Hi there 👋. Looks like you are running TensorFlow.js in ' + + 'Node.js. To speed things up dramatically, install our node ' + + 'backend, which binds to TensorFlow C++, by running ' + + 'npm i @tensorflow/tfjs-node, ' + + 'or npm i @tensorflow/tfjs-node-gpu if you have CUDA. ' + + 'Then call require(\'@tensorflow/tfjs-node\'); (-gpu ' + + 'suffix for CUDA) at the start of your program. ' + + 'Visit https://github.com/tensorflow/tfjs-node for more details.' 
+ + '\n============================'); + } + } + const dataId = {}; + this.data.set(dataId, { values, dtype }); + return dataId; + } + move(dataId, values, shape, dtype) { + this.data.set(dataId, { values, dtype }); + } + numDataIds() { + return this.data.numDataIds(); + } + async read(dataId) { + return this.readSync(dataId); + } + readSync(dataId) { + const { dtype, complexTensors } = this.data.get(dataId); + if (dtype === 'complex64') { + const realValues = this.readSync(complexTensors.real.dataId); + const imagValues = this.readSync(complexTensors.imag.dataId); + return dist["backend_util"].mergeRealAndImagArrays(realValues, imagValues); + } + return this.data.get(dataId).values; + } + bufferSync(t) { + const data = this.readSync(t.dataId); + let decodedData = data; + if (t.dtype === 'string') { + try { + // Decode the bytes into string. + decodedData = data.map(d => dist["util"].decodeString(d)); + } + catch (_a) { + throw new Error('Failed to decode encoded string bytes into utf-8'); + } + } + return dist["buffer"](t.shape, t.dtype, decodedData); + } + makeOutput(values, shape, dtype) { + const dataId = this.write(values, shape, dtype); + return Object(dist["engine"])().makeTensorFromDataId(dataId, shape, dtype, this); + } + disposeData(dataId) { + if (this.data.has(dataId)) { + const { complexTensors } = this.data.get(dataId); + if (complexTensors != null) { + complexTensors.real.dispose(); + complexTensors.imag.dispose(); + } + this.data.delete(dataId); + } + } + async time(f) { + const start = dist["util"].now(); + f(); + const kernelMs = dist["util"].now() - start; + return { kernelMs }; + } + memory() { + return { + // Unreliable due to automatic gc. The numbers above are cumulative. + unreliable: true, + reasons: ['The reported memory is an upper bound. 
Due to automatic garbage ' + + 'collection, the true allocated memory may be less.'] + }; + } + complex(real, imag) { + const result = this.makeOutput(null, real.shape, 'complex64'); + const resultData = this.data.get(result.dataId); + // The backend owns the reference to the underlying real and imaginary + // clones. These will explicitly get disposed when the complex tensor is + // disposed. + resultData.complexTensors = { + real: Object(dist["engine"])().keep(real.clone()), + imag: Object(dist["engine"])().keep(imag.clone()) + }; + return result; + } + real(input) { + const resultData = this.data.get(input.dataId); + return resultData.complexTensors.real.clone(); + } + imag(input) { + const resultData = this.data.get(input.dataId); + return resultData.complexTensors.imag.clone(); + } + slice(x, begin, size) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'slice'); + const isContinous = dist["slice_util"].isSliceContinous(x.shape, begin, size); + if (isContinous) { + const flatOffset = dist["slice_util"].computeFlatOffset(begin, x.strides); + const length = dist["util"].sizeFromShape(size); + const vals = this.readSync(x.dataId); + return dist["tensor"](vals.subarray(flatOffset, flatOffset + length), size, x.dtype); + } + const buffer = dist["buffer"](size, x.dtype); + const xBuf = this.bufferSync(x); + for (let i = 0; i < buffer.size; ++i) { + const loc = buffer.indexToLoc(i); + const xLoc = loc.map((idx, j) => idx + begin[j]); + buffer.values[i] = xBuf.get(...xLoc); + } + return buffer.toTensor(); + } + stridedSlice(x, begin, end, strides) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'stridedSlice'); + const outShape = dist["slice_util"].computeOutShape(begin, end, strides); + if (outShape.some(axis => axis === 0)) { + return dist["tensor"]([], outShape); + } + const buffer = dist["buffer"](outShape, x.dtype); + const xBuf = this.bufferSync(x); + for (let i = 0; i < buffer.size; i++) { + const loc = buffer.indexToLoc(i); + const newLoc = new 
Array(loc.length); + for (let j = 0; j < newLoc.length; j++) { + newLoc[j] = loc[j] * strides[j] + begin[j]; + } + buffer.set(xBuf.get(...newLoc), ...loc); + } + return buffer.toTensor(); + } + diag(x) { + const xVals = this.readSync(x.dataId); + const buffer = dist["buffer"]([x.size, x.size], x.dtype); + const vals = buffer.values; + for (let i = 0; i < xVals.length; i++) { + vals[i * x.size + i] = xVals[i]; + } + return buffer.toTensor(); + } + unstack(x, axis) { + const num = x.shape[axis]; + const outShape = new Array(x.rank - 1); + let outIndex = 0; + for (let i = 0; i < x.rank; i++) { + if (i !== axis) { + outShape[outIndex++] = x.shape[i]; + } + } + const begin = new Array(x.rank).fill(0); + const size = x.shape.slice(); + size[axis] = 1; + const res = new Array(num); + for (let i = 0; i < res.length; i++) { + begin[axis] = i; + res[i] = this.slice(x, begin, size).reshape(outShape); + } + return res; + } + reverse(x, axis) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'reverse'); + const buffer = dist["buffer"](x.shape, x.dtype); + const xBuf = this.bufferSync(x); + for (let i = 0; i < buffer.size; i++) { + const outLoc = buffer.indexToLoc(i); + const inLoc = outLoc.slice(); + axis.forEach(ax => inLoc[ax] = x.shape[ax] - 1 - inLoc[ax]); + buffer.set(xBuf.get(...inLoc), ...outLoc); + } + return buffer.toTensor(); + } + concat(tensors, axis) { + if (tensors[0].dtype === 'complex64') { + const reals = tensors.map((t) => dist["real"](t)); + const imags = tensors.map((t) => dist["imag"](t)); + return dist["complex"](this.concat(reals, axis), this.concat(imags, axis)); + } + const tensors2D = tensors.map(t => { + const innerSize = dist["util"].sizeFromShape(t.shape.slice(axis)); + return t.as2D(-1, innerSize); + }); + const outShape = dist["backend_util"].computeOutShape(tensors2D.map(t => t.shape), 1 /* axis + */); + const values = dist["buffer"](outShape, tensors[0].dtype) + .values; + if (tensors2D[0].shape[0] === 1) { + // Use built-in TypedArray.set() 
method for speed. + let offset = 0; + tensors2D.forEach(t => { + values.set(this.readSync(t.dataId), offset); + offset += t.size; + }); + } + else { + let colOffset = 0; + tensors2D.forEach(t => { + const tVals = this.readSync(t.dataId); + let tIdx = 0; + for (let row = 0; row < t.shape[0]; ++row) { + const resIdx = row * outShape[1] + colOffset; + for (let col = 0; col < t.shape[1]; ++col) { + values[resIdx + col] = tVals[tIdx++]; + } + } + colOffset += t.shape[1]; + }); + } + const finalOutShape = dist["backend_util"].computeOutShape(tensors.map(t => t.shape), axis); + return dist["tensor"](values, finalOutShape, tensors[0].dtype); + } + neg(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'neg'); + return this.multiply(dist["scalar"](-1), x); + } + add(a, b) { + if (a.dtype === 'complex64' || b.dtype === 'complex64') { + return this.broadcastedBinaryComplexOp(a.cast('complex64'), b.cast('complex64'), (aReal, aImag, bReal, bImag) => { + return { real: aReal + bReal, imag: aImag + bImag }; + }); + } + return this.broadcastedBinaryOp(a, b, Object(dist["upcastType"])(a.dtype, b.dtype), (aValue, bValue) => aValue + bValue); + } + addN(tensors) { + Object(cpu_util["a" /* assertNotComplex */])(tensors, 'addN'); + const vals = tensors.map(t => this.readSync(t.dataId)); + const result = dist["buffer"](tensors[0].shape, tensors[0].dtype); + const resultVals = result.values; + for (let i = 0; i < tensors.length; i++) { + const currVals = vals[i]; + for (let j = 0; j < resultVals.length; j++) { + resultVals[j] += currVals[j]; + } + } + return result.toTensor(); + } + softmax(logits, dim) { + const axes = dist["util"].parseAxisParam([dim], logits.shape); + // TODO(annxingyuan): Call maxImpl rather than op as part of softmax kernel + // modularization. 
+ const maxLogit = Object(dist["max"])(logits, axes); + const expandedShape = dist["backend_util"].expandShapeToKeepDim(maxLogit.shape, axes); + const a = this.subtract(logits, maxLogit.reshape(expandedShape)); + const b = this.exp(a); + const sumExp = this.sum(b, axes).reshape(expandedShape); + // TODO(annxingyuan): Call divImpl rather than op as part of softmax + // kernel modularization. + return dist["div"](b, sumExp); + } + subtract(a, b) { + if (a.dtype === 'complex64' || b.dtype === 'complex64') { + return this.broadcastedBinaryComplexOp(a.cast('complex64'), b.cast('complex64'), (aReal, aImag, bReal, bImag) => { + return { real: aReal - bReal, imag: aImag - bImag }; + }); + } + return this.broadcastedBinaryOp(a, b, Object(dist["upcastType"])(a.dtype, b.dtype), (aValue, bValue) => aValue - bValue); + } + pow(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'pow'); + return this.broadcastedBinaryOp(a, b, a.dtype, (aValue, bValue) => Math.pow(aValue, bValue)); + } + batchMatMul(a, b, transposeA, transposeB) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'matMul'); + const sharedDim = transposeA ? a.shape[1] : a.shape[2]; + const leftDim = transposeA ? a.shape[2] : a.shape[1]; + const rightDim = transposeB ? b.shape[1] : b.shape[2]; + const batchDim = a.shape[0]; + const aValues = this.readSync(a.dataId); + const bValues = this.readSync(b.dataId); + const [aBatch, aOuterStep, aInnerStep] = transposeA ? + [a.strides[0], 1, a.strides[1]] : + [a.strides[0], a.strides[1], 1]; + const [bInnerStep, bOuterStep, bBatch] = transposeB ? 
+ [1, b.strides[1], b.strides[0]] : + [b.strides[1], 1, b.strides[0]]; + const size = leftDim * rightDim; + const result = dist["buffer"]([batchDim, leftDim, rightDim], a.dtype); + const resVals = result.values; + const blockSize = this.blockSize; + for (let b = 0; b < batchDim; b++) { + for (let i0 = 0; i0 < leftDim; i0 += blockSize) { + for (let j0 = 0; j0 < rightDim; j0 += blockSize) { + for (let k0 = 0; k0 < sharedDim; k0 += blockSize) { + // for when blockSize doesn't evenly divide the input + const iBlock = Math.min(i0 + blockSize, leftDim); + const jBlock = Math.min(j0 + blockSize, rightDim); + const kBlock = Math.min(k0 + blockSize, sharedDim); + for (let i = i0; i < iBlock; i++) { + for (let j = j0; j < jBlock; j++) { + let sum = 0.0; + for (let k = k0; k < kBlock; k++) { + sum += aValues[b * aBatch + i * aOuterStep + k * aInnerStep] * + bValues[k * bInnerStep + j * bOuterStep + b * bBatch]; + } + resVals[b * size + (i * rightDim + j)] += sum; + } + } + } + } + } + } + return result.toTensor(); + } + fusedBatchMatMul({ a, b, transposeA, transposeB, bias, activation, preluActivationWeights }) { + let result = this.batchMatMul(a, b, transposeA, transposeB); + if (bias) { + result = this.add(result, bias); + } + if (activation) { + result = + mapActivation(this, result, activation, preluActivationWeights); + } + return result; + } + multiply(a, b) { + if (a.dtype === 'complex64' || b.dtype === 'complex64') { + return this.broadcastedBinaryComplexOp(a.cast('complex64'), b.cast('complex64'), (aReal, aImag, bReal, bImag) => { + return { + real: aReal * bReal - aImag * bImag, + imag: aReal * bImag + aImag * bReal + }; + }); + } + return this.broadcastedBinaryOp(a, b, Object(dist["upcastType"])(a.dtype, b.dtype), (aValue, bValue) => aValue * bValue); + } + floorDiv(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'floorDiv'); + const op = (a, b) => Math.floor(a / b); + const outputDtype = 'int32'; + return this.broadcastedBinaryOp(a, b, outputDtype, 
op); + } + sum(x, axes) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'sum'); + dist["backend_util"].assertAxesAreInnerMostDims('sum', axes, x.rank); + const [outShape, reduceShape] = dist["backend_util"].computeOutAndReduceShapes(x.shape, axes); + const resultDtype = Object(dist["upcastType"])(x.dtype, 'int32'); + const result = dist["zeros"](outShape, resultDtype); + const reduceSize = dist["util"].sizeFromShape(reduceShape); + const vals = this.readSync(result.dataId); + const aVals = this.readSync(x.dataId); + for (let i = 0; i < vals.length; ++i) { + const offset = i * reduceSize; + let sum = 0; + for (let j = 0; j < reduceSize; ++j) { + sum += aVals[offset + j]; + } + vals[i] = sum; + } + return result; + } + prod(x, axes) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'sum'); + const [outShape, reduceShape] = dist["backend_util"].computeOutAndReduceShapes(x.shape, axes); + const resultDtype = Object(dist["upcastType"])(x.dtype, 'int32'); + const result = dist["zeros"](outShape, resultDtype); + const reduceSize = dist["util"].sizeFromShape(reduceShape); + const vals = this.readSync(result.dataId); + const aVals = this.readSync(x.dataId); + for (let i = 0; i < vals.length; ++i) { + const offset = i * reduceSize; + let prod = 1; + for (let j = 0; j < reduceSize; ++j) { + prod *= aVals[offset + j]; + } + vals[i] = prod; + } + return result; + } + unsortedSegmentSum(x, segmentIds, numSegments) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'unsortedSegmentSum'); + const res = []; + // Reshape the segment id's so that they can be broadcast with + // x. 
The new shape should be [segmentIds.shape, 1, ..., 1] + const numIters = x.rank - segmentIds.rank; + for (let i = 0; i < numIters; ++i) { + segmentIds = segmentIds.expandDims(i + 1); + } + for (let i = 0; i < numSegments; ++i) { + const segmentId = dist["scalar"](i, 'int32'); + const mask = dist["equal"](segmentId, segmentIds).asType('float32'); + const sum = mask.mul(x).sum(0); + res.push(sum); + } + return dist["stack"](res); + } + argMin(x, axis) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'argMin'); + const axes = [axis]; + dist["backend_util"].assertAxesAreInnerMostDims('argMin', axes, x.rank); + const [outShape, reduceShape] = dist["backend_util"].computeOutAndReduceShapes(x.shape, axes); + const result = dist["zeros"](outShape, 'int32'); + const reduceSize = dist["util"].sizeFromShape(reduceShape); + const vals = this.readSync(result.dataId); + const aVals = this.readSync(x.dataId); + for (let i = 0; i < vals.length; ++i) { + const offset = i * reduceSize; + let min = aVals[offset]; + let minIndex = 0; + for (let j = 0; j < reduceSize; ++j) { + const value = aVals[offset + j]; + if (value < min) { + min = value; + minIndex = j; + } + } + vals[i] = minIndex; + } + return result; + } + argMax(x, axis) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'argMax'); + const axes = [axis]; + dist["backend_util"].assertAxesAreInnerMostDims('argMax', axes, x.rank); + const [outShape, reduceShape] = dist["backend_util"].computeOutAndReduceShapes(x.shape, axes); + const result = dist["zeros"](outShape, 'int32'); + const reduceSize = dist["util"].sizeFromShape(reduceShape); + const vals = this.readSync(result.dataId); + const aVals = this.readSync(x.dataId); + for (let i = 0; i < vals.length; ++i) { + const offset = i * reduceSize; + let max = aVals[offset]; + let maxIndex = 0; + for (let j = 0; j < reduceSize; ++j) { + const value = aVals[offset + j]; + if (value > max) { + max = value; + maxIndex = j; + } + } + vals[i] = maxIndex; + } + return result; + } + 
cumsum(x, axis, exclusive, reverse) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'cumsum'); + if (axis !== x.rank - 1) { + throw new Error(`backend.cumsum in CPU expects an inner-most axis=${x.rank - 1} ` + + `but got axis=${axis}`); + } + const resultDtype = Object(dist["upcastType"])(x.dtype, 'int32'); + const result = dist["zeros"](x.shape, resultDtype); + const vals = this.readSync(result.dataId); + const aVals = this.readSync(x.dataId); + const finalDim = x.shape[x.rank - 1]; + const indexAdjuster = reverse ? + (i, j) => i + finalDim - j - 1 : + (i, j) => i + j; + for (let i = 0; i < aVals.length; i += finalDim) { + for (let j = 0; j < finalDim; j++) { + const idx = indexAdjuster(i, j); + if (j === 0) { + vals[idx] = exclusive ? 0 : aVals[idx]; + } + else { + const prevIdx = indexAdjuster(i, j - 1); + vals[idx] = exclusive ? aVals[prevIdx] + vals[prevIdx] : + aVals[idx] + vals[prevIdx]; + } + } + } + return result; + } + equal(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'equal'); + return this.broadcastedBinaryOp(a, b, 'bool', (aVal, bVal) => { + return (aVal === bVal) ? 1 : 0; + }); + } + notEqual(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'notEqual'); + return this.broadcastedBinaryOp(a, b, 'bool', (aVal, bVal) => { + return (aVal !== bVal) ? 1 : 0; + }); + } + less(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'less'); + return this.broadcastedBinaryOp(a, b, 'bool', (aVal, bVal) => { + return (aVal < bVal) ? 1 : 0; + }); + } + lessEqual(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'lessEqual'); + return this.broadcastedBinaryOp(a, b, 'bool', (aVal, bVal) => { + return (aVal <= bVal) ? 1 : 0; + }); + } + greater(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'greater'); + return this.broadcastedBinaryOp(a, b, 'bool', (aVal, bVal) => { + return (aVal > bVal) ? 
1 : 0; + }); + } + greaterEqual(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'greaterEqual'); + return this.broadcastedBinaryOp(a, b, 'bool', (aVal, bVal) => { + return (aVal >= bVal) ? 1 : 0; + }); + } + logicalNot(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'logicalNot'); + const values = this.readSync(x.dataId); + const newValues = new Uint8Array(values.length); + for (let i = 0; i < values.length; ++i) { + newValues[i] = values[i] ? 0 : 1; + } + return this.makeOutput(newValues, x.shape, 'bool'); + } + logicalAnd(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'logicalAnd'); + return this.broadcastedBinaryOp(a, b, 'bool', (aVal, bVal) => { + return aVal && bVal; + }); + } + logicalOr(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'logicalOr'); + return this.broadcastedBinaryOp(a, b, 'bool', (aVal, bVal) => { + return aVal || bVal; + }); + } + select(condition, a, b) { + Object(cpu_util["a" /* assertNotComplex */])([condition, a, b], 'select'); + const values = this.readSync(condition.dataId); + const aValues = this.readSync(a.dataId); + const bValues = this.readSync(b.dataId); + const result = dist["zeros"](a.shape, Object(dist["upcastType"])(a.dtype, b.dtype)); + const newValues = this.readSync(result.dataId); + let index = 0; + const offset = condition.rank === 0 || condition.rank > 1 || a.rank === 1 ? 
+ 1 : + dist["util"].sizeFromShape(a.shape.slice(1)); + for (let i = 0; i < values.length; i++) { + for (let j = 0; j < offset; j++) { + if (values[i] === 1) { + newValues[index++] = aValues[i]; + } + else { + newValues[index++] = bValues[i]; + } + } + } + return result; + } + where(condition) { + Object(cpu_util["a" /* assertNotComplex */])([condition], 'where'); + const condVals = this.readSync(condition.dataId); + return whereImpl(condition.shape, condVals); + } + topk(x, k, sorted) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'topk'); + const xVals = this.readSync(x.dataId); + return topkImpl(xVals, x.shape, x.dtype, k, sorted); + } + min(x, axes) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'min'); + dist["backend_util"].assertAxesAreInnerMostDims('min', axes, x.rank); + const [outShape, reduceShape] = dist["backend_util"].computeOutAndReduceShapes(x.shape, axes); + const result = dist["zeros"](outShape, x.dtype); + const reduceSize = dist["util"].sizeFromShape(reduceShape); + const vals = this.readSync(result.dataId); + const aVals = this.readSync(x.dataId); + for (let i = 0; i < vals.length; ++i) { + const offset = i * reduceSize; + let min = aVals[offset]; + for (let j = 0; j < reduceSize; ++j) { + const value = aVals[offset + j]; + if (value < min) { + min = value; + } + } + vals[i] = min; + } + return result; + } + minimum(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'minimum'); + return this.broadcastedBinaryOp(a, b, a.dtype, (aVal, bVal) => Math.min(aVal, bVal)); + } + mod(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'mod'); + return this.broadcastedBinaryOp(a, b, a.dtype, (aVal, bVal) => { + const rem = aVal % bVal; + if ((aVal < 0 && bVal < 0) || (aVal >= 0 && bVal >= 0)) { + return rem; + } + else { + return (rem + bVal) % bVal; + } + }); + } + maximum(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'maximum'); + return this.broadcastedBinaryOp(a, b, a.dtype, (aVal, bVal) => 
Math.max(aVal, bVal)); + } + all(x, axes) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'all'); + dist["backend_util"].assertAxesAreInnerMostDims('all', axes, x.rank); + const [outShape, reduceShape] = dist["backend_util"].computeOutAndReduceShapes(x.shape, axes); + const result = dist["zeros"](outShape, x.dtype); + const reduceSize = dist["util"].sizeFromShape(reduceShape); + const vals = this.readSync(result.dataId); + const aVals = this.readSync(x.dataId); + for (let i = 0; i < vals.length; ++i) { + const offset = i * reduceSize; + let all = aVals[offset]; + for (let j = 0; j < reduceSize; ++j) { + const value = aVals[offset + j]; + all = all && value; + } + vals[i] = all; + } + return result; + } + any(x, axes) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'any'); + dist["backend_util"].assertAxesAreInnerMostDims('any', axes, x.rank); + const [outShape, reduceShape] = dist["backend_util"].computeOutAndReduceShapes(x.shape, axes); + const result = dist["zeros"](outShape, x.dtype); + const reduceSize = dist["util"].sizeFromShape(reduceShape); + const vals = this.readSync(result.dataId); + const aVals = this.readSync(x.dataId); + for (let i = 0; i < vals.length; ++i) { + const offset = i * reduceSize; + let anyVal = aVals[offset]; + for (let j = 0; j < reduceSize; ++j) { + const value = aVals[offset + j]; + anyVal = anyVal || value; + } + vals[i] = anyVal; + } + return result; + } + squaredDifference(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'squaredDifference'); + return this.broadcastedBinaryOp(a, b, a.dtype, (aVal, bVal) => { + const diff = aVal - bVal; + return diff * diff; + }); + } + ceil(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'ceil'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + newValues[i] = Math.ceil(values[i]); + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + floor(x) { + Object(cpu_util["a" 
/* assertNotComplex */])(x, 'floor'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + newValues[i] = Math.floor(values[i]); + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + sign(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'x'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + if (values[i] < 0) { + newValues[i] = -1; + } + else if (values[i] > 0) { + newValues[i] = 1; + } + else { + newValues[i] = 0; + } + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + isNaN(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'x'); + const values = this.readSync(x.dataId); + const newValues = new Uint8Array(values.length); + for (let i = 0; i < values.length; ++i) { + if (Number.isNaN(values[i])) { + newValues[i] = 1; + } + } + return this.makeOutput(newValues, x.shape, 'bool'); + } + isInf(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'x'); + const values = this.readSync(x.dataId); + const newValues = new Uint8Array(values.length); + for (let i = 0; i < values.length; ++i) { + if (Math.abs(values[i]) === Infinity) { + newValues[i] = 1; + } + } + return this.makeOutput(newValues, x.shape, 'bool'); + } + isFinite(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'x'); + const values = this.readSync(x.dataId); + const newValues = new Uint8Array(values.length); + for (let i = 0; i < values.length; ++i) { + if (Number.isFinite(values[i])) { + newValues[i] = 1; + } + } + return this.makeOutput(newValues, x.shape, 'bool'); + } + round(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'round'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + // The algorithm is based on banker's rounding. 
+ const base = Math.floor(values[i]); + if (values[i] - base < 0.5) { + newValues[i] = Math.floor(values[i]); + } + else if (values[i] - base > 0.5) { + newValues[i] = Math.ceil(values[i]); + } + else { + if (base % 2.0 === 0.0) { + newValues[i] = base; + } + else { + newValues[i] = base + 1.0; + } + } + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + exp(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'exp'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + newValues[i] = Math.exp(values[i]); + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + expm1(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'expm1'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + newValues[i] = Math.expm1(values[i]); + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + log(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'log'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + const value = values[i]; + newValues[i] = Math.log(value); + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + log1p(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'log1p'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + const value = values[i]; + newValues[i] = Math.log1p(value); + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + sqrt(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'sqrt'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + const value = values[i]; + newValues[i] = Math.sqrt(value); + } + return this.makeOutput(newValues, x.shape, 
'float32'); + } + rsqrt(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'rsqrt'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + const value = values[i]; + newValues[i] = 1 / Math.sqrt(value); + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + reciprocal(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'reciprocal'); + const values = this.readSync(x.dataId); + const newValues = new Float32Array(values.length); + for (let i = 0; i < values.length; ++i) { + newValues[i] = 1 / values[i]; + } + return this.makeOutput(newValues, x.shape, 'float32'); + } + linear(x) { + return x; + } + relu(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'relu'); + const res = dist["zeros"](x.shape, x.dtype); + const resVals = this.readSync(res.dataId); + const inVals = this.readSync(x.dataId); + for (let i = 0; i < inVals.length; ++i) { + resVals[i] = Math.max(0, inVals[i]); + } + return res; + } + relu6(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'relu'); + const res = dist["zeros"](x.shape, x.dtype); + const resVals = this.readSync(res.dataId); + const inVals = this.readSync(x.dataId); + for (let i = 0; i < inVals.length; ++i) { + resVals[i] = Math.min(Math.max(0, inVals[i]), 6); + } + return res; + } + prelu(x, a) { + Object(cpu_util["a" /* assertNotComplex */])([x, a], 'prelu'); + return this.broadcastedBinaryOp(x, a, x.dtype, (xValue, aValue) => xValue < 0 ? 
aValue * xValue : xValue); + } + elu(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'elu'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + const v = values[i]; + if (v >= 0) { + resultValues[i] = v; + } + else { + resultValues[i] = (Math.exp(v) - 1); + } + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + eluDer(dy, y) { + Object(cpu_util["a" /* assertNotComplex */])([dy, y], 'eluDer'); + const resultValues = new Float32Array(y.size); + const values = this.readSync(y.dataId); + const dyValues = this.readSync(dy.dataId); + for (let i = 0; i < values.length; ++i) { + const v = values[i]; + if (v >= 1) { + resultValues[i] = dyValues[i]; + } + else { + resultValues[i] = dyValues[i] * (v + 1); + } + } + return this.makeOutput(resultValues, y.shape, 'float32'); + } + selu(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'selu'); + // Stable and Attracting Fixed Point (0, 1) for Normalized Weights. + // see: https://arxiv.org/abs/1706.02515 + const scaleAlpha = dist["backend_util"].SELU_SCALEALPHA; + const scale = dist["backend_util"].SELU_SCALE; + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + const v = values[i]; + if (v >= 0) { + resultValues[i] = scale * v; + } + else { + resultValues[i] = scaleAlpha * (Math.exp(v) - 1); + } + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + clip(x, min, max) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'clip'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + const v = values[i]; + resultValues[i] = v > max ? max : (v < min ? 
min : v); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + abs(x) { + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.abs(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + complexAbs(x) { + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < x.size; ++i) { + const real = values[i * 2]; + const imag = values[i * 2 + 1]; + resultValues[i] = Math.hypot(real, imag); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + int(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'int'); + const resultValues = new Int32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = values[i]; + } + return this.makeOutput(resultValues, x.shape, 'int32'); + } + sigmoid(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'sigmoid'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = 1 / (1 + Math.exp(-values[i])); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + softplus(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'softplus'); + // mirrors the implementation of tf.nn.softplus: https://goo.gl/vkcvwX + // epsilon is the difference between 1.0 and the next representable float. + // For a single precision 32 bit float this should be 2^-23, see: + // https://math.byu.edu/~schow/work/IEEEFloatingPoint.htm + const epsilon = 1.1920928955078125e-7; + const threshold = Math.log(epsilon) + 2.0; + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + // Value above which exp(x) may overflow, but softplus(x) == x + // is within machine epsilon. 
+ const tooLarge = values[i] > -threshold; + // Value below which exp(x) may underflow, but softplus(x) == exp(x) + // is within machine epsilon. + const tooSmall = values[i] < threshold; + const expX = Math.exp(values[i]); + let result; + if (tooSmall) { + result = expX; + } + else if (tooLarge) { + result = values[i]; + } + else { + result = Math.log(1.0 + expX); + } + resultValues[i] = result; + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + sin(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'sin'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.sin(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + cos(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'cos'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.cos(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + tan(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'tan'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.tan(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + asin(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'asin'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.asin(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + acos(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'acos'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.acos(values[i]); + } + return 
this.makeOutput(resultValues, x.shape, 'float32'); + } + atan(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'atan'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.atan(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + atan2(a, b) { + Object(cpu_util["a" /* assertNotComplex */])([a, b], 'atan2'); + return this.broadcastedBinaryOp(a, b, a.dtype, (aValue, bValue) => Math.atan2(aValue, bValue)); + } + sinh(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'sinh'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.sinh(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + cosh(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'cosh'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.cosh(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + tanh(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'tanh'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = dist["util"].tanh(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + asinh(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'asinh'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.asinh(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + acosh(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'acosh'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for 
(let i = 0; i < values.length; ++i) { + resultValues[i] = Math.acosh(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + atanh(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'atanh'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + resultValues[i] = Math.atanh(values[i]); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + erf(x) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'erf'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + const p = dist["backend_util"].ERF_P; + const a1 = dist["backend_util"].ERF_A1; + const a2 = dist["backend_util"].ERF_A2; + const a3 = dist["backend_util"].ERF_A3; + const a4 = dist["backend_util"].ERF_A4; + const a5 = dist["backend_util"].ERF_A5; + for (let i = 0; i < values.length; ++i) { + const sign = Math.sign(values[i]); + const v = Math.abs(values[i]); + const t = 1.0 / (1.0 + p * v); + resultValues[i] = sign * + (1.0 - + (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * + Math.exp(-v * v)); + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + step(x, alpha = 0) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'step'); + const resultValues = new Float32Array(x.size); + const values = this.readSync(x.dataId); + for (let i = 0; i < values.length; ++i) { + const value = values[i]; + if (isNaN(value)) { + resultValues[i] = NaN; + } + else { + resultValues[i] = value > 0 ? 
1 : alpha; + } + } + return this.makeOutput(resultValues, x.shape, 'float32'); + } + fusedConv2d({ input, filter, convInfo, bias, activation, preluActivationWeights }) { + let result = this.conv2d(input, filter, convInfo); + if (bias) { + result = this.add(result, bias); + } + if (activation) { + result = + mapActivation(this, result, activation, preluActivationWeights); + } + return result; + } + conv2d(x, filter, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([x, filter], 'conv2d'); + const filterHeight = convInfo.filterHeight; + const filterWidth = convInfo.filterWidth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const padLeft = convInfo.padInfo.left; + const padTop = convInfo.padInfo.top; + const isChannelsLast = convInfo.dataFormat === 'channelsLast'; + const y = dist["buffer"](convInfo.outShape, x.dtype); + const xBatchStride = x.strides[0]; + const xRowStride = isChannelsLast ? x.strides[1] : x.strides[2]; + const xColStride = isChannelsLast ? x.strides[2] : 1; + const xChannelStride = isChannelsLast ? 1 : x.strides[1]; + const yBatchStride = y.strides[0]; + const yRowStride = isChannelsLast ? y.strides[1] : y.strides[2]; + const yColStride = isChannelsLast ? y.strides[2] : 1; + const yChannelStride = isChannelsLast ? 
1 : y.strides[1]; + const xVals = this.readSync(x.dataId); + const wVals = this.readSync(filter.dataId); + const yVals = y.values; + for (let b = 0; b < convInfo.batchSize; ++b) { + const xOffset1 = b * xBatchStride; + const yOffset1 = b * yBatchStride; + for (let yR = 0; yR < convInfo.outHeight; ++yR) { + const yOffset2 = yOffset1 + yR * yRowStride; + const xRCorner = yR * convInfo.strideHeight - padTop; + for (let wR = 0; wR < filterHeight; wR++) { + const xR = xRCorner + wR * dilationHeight; + if (xR < 0 || xR >= convInfo.inHeight) { + continue; + } + const wOffset1 = wR * filter.strides[0]; + const xOffset2 = xOffset1 + xR * xRowStride; + for (let yC = 0; yC < convInfo.outWidth; ++yC) { + const yOffset3 = yOffset2 + yC * yColStride; + const xCCorner = yC * convInfo.strideWidth - padLeft; + for (let wC = 0; wC < filterWidth; wC++) { + const xC = xCCorner + wC * dilationWidth; + if (xC < 0 || xC >= convInfo.inWidth) { + continue; + } + const wOffset2 = wOffset1 + wC * filter.strides[1]; + const xOffset3 = xOffset2 + xC * xColStride; + let wOffset3 = wOffset2; + for (let d1 = 0; d1 < convInfo.inChannels; ++d1) { + const xVal = xVals[xOffset3 + d1 * xChannelStride]; + for (let d2 = 0; d2 < convInfo.outChannels; ++d2) { + yVals[yOffset3 + d2 * yChannelStride] += + xVal * wVals[wOffset3 + d2]; + } + wOffset3 += convInfo.outChannels; + } + } + } + } + } + } + return y.toTensor(); + } + conv3d(x, filter, convInfo) { + const filterDepth = convInfo.filterDepth; + const filterHeight = convInfo.filterHeight; + const filterWidth = convInfo.filterWidth; + const dilationDepth = convInfo.dilationDepth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const padFront = convInfo.padInfo.front; + const padLeft = convInfo.padInfo.left; + const padTop = convInfo.padInfo.top; + const y = dist["buffer"](convInfo.outShape, x.dtype); + const xVals = this.readSync(x.dataId); + const wVals = this.readSync(filter.dataId); + const yVals = 
y.values; + for (let b = 0; b < convInfo.batchSize; ++b) { + const xOffset1 = b * x.strides[0]; + const yOffset1 = b * y.strides[0]; + for (let yF = 0; yF < convInfo.outDepth; ++yF) { + const yOffset2 = yOffset1 + yF * y.strides[1]; + const xFCorner = yF * convInfo.strideDepth - padFront; + for (let wF = 0; wF < filterDepth; wF++) { + const xF = xFCorner + wF * dilationDepth; + if (xF < 0 || xF >= convInfo.inDepth) { + continue; + } + const wOffset1 = wF * filter.strides[0]; + const xOffset2 = xOffset1 + xF * x.strides[1]; + for (let yR = 0; yR < convInfo.outHeight; ++yR) { + const yOffset3 = yOffset2 + yR * y.strides[2]; + const xRCorner = yR * convInfo.strideHeight - padTop; + for (let wR = 0; wR < filterHeight; wR++) { + const xR = xRCorner + wR * dilationHeight; + if (xR < 0 || xR >= convInfo.inHeight) { + continue; + } + const wOffset2 = wOffset1 + wR * filter.strides[1]; + const xOffset3 = xOffset2 + xR * x.strides[2]; + for (let yC = 0; yC < convInfo.outWidth; ++yC) { + const yOffset4 = yOffset3 + yC * convInfo.outChannels; + const xCCorner = yC * convInfo.strideWidth - padLeft; + for (let wC = 0; wC < filterWidth; wC++) { + const xC = xCCorner + wC * dilationWidth; + if (xC < 0 || xC >= convInfo.inWidth) { + continue; + } + const wOffset3 = wOffset2 + wC * filter.strides[2]; + const xOffset4 = xOffset3 + xC * convInfo.inChannels; + let wOffset4 = wOffset3; + for (let d1 = 0; d1 < convInfo.inChannels; ++d1) { + const xVal = xVals[xOffset4 + d1]; + for (let d2 = 0; d2 < convInfo.outChannels; ++d2) { + yVals[yOffset4 + d2] += xVal * wVals[wOffset4 + d2]; + } + wOffset4 += convInfo.outChannels; + } + } + } + } + } + } + } + } + return y.toTensor(); + } + conv2dDerInput(dy, filter, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([dy, filter], 'conv2dDerInput'); + const dx = dist["buffer"](convInfo.inShape, 'float32'); + const dxValues = dx.values; + const dyValues = this.readSync(dy.dataId); + const fltValues = this.readSync(filter.dataId); + const 
[fltS0, fltS1, fltS2] = filter.strides; + const { batchSize, filterHeight, filterWidth, inChannels, inHeight, inWidth, outChannels, outHeight, outWidth, strideHeight, strideWidth, dataFormat } = convInfo; + const topPad = filterHeight - 1 - convInfo.padInfo.top; + const leftPad = filterWidth - 1 - convInfo.padInfo.left; + const isChannelsLast = dataFormat === 'channelsLast'; + const xBatchStride = dx.strides[0]; + const xRowStride = isChannelsLast ? dx.strides[1] : dx.strides[2]; + const xColStride = isChannelsLast ? dx.strides[2] : 1; + const xChannelStride = isChannelsLast ? 1 : dx.strides[1]; + const yBatchStride = dy.strides[0]; + const yRowStride = isChannelsLast ? dy.strides[1] : dy.strides[2]; + const yColStride = isChannelsLast ? dy.strides[2] : 1; + const yChannelStride = isChannelsLast ? 1 : dy.strides[1]; + for (let b = 0; b < batchSize; ++b) { + for (let d1 = 0; d1 < inChannels; ++d1) { + for (let xR = 0; xR < inHeight; ++xR) { + const xRCorner = xR - topPad; + const xRMin = Math.max(0, Math.ceil(xRCorner / strideHeight)); + const yRMax = Math.min(outHeight, (filterHeight + xRCorner) / strideHeight); + for (let xC = 0; xC < inWidth; ++xC) { + const xCCorner = xC - leftPad; + const xCMin = Math.max(0, Math.ceil(xCCorner / strideWidth)); + const yCMax = Math.min(outWidth, (filterWidth + xCCorner) / strideWidth); + let dotProd = 0; + for (let yR = xRMin; yR < yRMax; ++yR) { + const wR = yR * strideHeight - xRCorner; + for (let yC = xCMin; yC < yCMax; ++yC) { + const wC = yC * strideWidth - xCCorner; + const dyOffset = yBatchStride * b + yRowStride * yR + yColStride * yC; + const fltOffset = fltS0 * (filterHeight - 1 - wR) + + fltS1 * (filterWidth - 1 - wC) + fltS2 * d1; + for (let d2 = 0; d2 < outChannels; ++d2) { + const pixel = dyValues[dyOffset + yChannelStride * d2]; + const weight = fltValues[fltOffset + d2]; + dotProd += pixel * weight; + } + } + } + const dxOffset = xBatchStride * b + xRowStride * xR + + xColStride * xC + xChannelStride * d1; + 
dxValues[dxOffset] = dotProd; + } + } + } + } + return dx.toTensor(); + } + conv3dDerInput(dy, filter, convInfo) { + const dx = dist["buffer"](convInfo.inShape, 'float32'); + const dxValues = dx.values; + const [dxS0, dxS1, dxS2, dxS3] = dx.strides; + const dyValues = this.readSync(dy.dataId); + const [dyS0, dyS1, dyS2, dyS3] = dy.strides; + const fltValues = this.readSync(filter.dataId); + const [fltS0, fltS1, fltS2, fltS3] = filter.strides; + const { batchSize, filterDepth, filterHeight, filterWidth, inChannels, inDepth, inHeight, inWidth, outChannels, outDepth, outHeight, outWidth, strideDepth, strideHeight, strideWidth } = convInfo; + const frontPad = filterDepth - 1 - convInfo.padInfo.front; + const topPad = filterHeight - 1 - convInfo.padInfo.top; + const leftPad = filterWidth - 1 - convInfo.padInfo.left; + for (let b = 0; b < batchSize; ++b) { + for (let d1 = 0; d1 < inChannels; ++d1) { + // Frames of depth + for (let xF = 0; xF < inDepth; ++xF) { + const xFCorner = xF - frontPad; + const xFMin = Math.max(0, Math.ceil(xFCorner / strideDepth)); + const yFMax = Math.min(outDepth, (filterDepth + xFCorner) / strideDepth); + // Rows as per standard 2d matrix notation + for (let xR = 0; xR < inHeight; ++xR) { + const xRCorner = xR - topPad; + const xRMin = Math.max(0, Math.ceil(xRCorner / strideHeight)); + const yRMax = Math.min(outHeight, (filterHeight + xRCorner) / strideHeight); + // Columns as per standard 2d matrix notation + for (let xC = 0; xC < inWidth; ++xC) { + const xCCorner = xC - leftPad; + const xCMin = Math.max(0, Math.ceil(xCCorner / strideWidth)); + const yCMax = Math.min(outWidth, (filterWidth + xCCorner) / strideWidth); + let dotProd = 0; + for (let yF = xFMin; yF < yFMax; ++yF) { + const wF = yF * strideDepth - xFCorner; + for (let yR = xRMin; yR < yRMax; ++yR) { + const wR = yR * strideHeight - xRCorner; + for (let yC = xCMin; yC < yCMax; ++yC) { + const wC = yC * strideWidth - xCCorner; + const dyOffset = dyS0 * b + dyS1 * yF + dyS2 * yR + 
dyS3 * yC; + const fltOffset = fltS0 * (filterDepth - 1 - wF) + + fltS1 * (filterHeight - 1 - wR) + + fltS2 * (filterWidth - 1 - wC) + fltS3 * d1; + for (let d2 = 0; d2 < outChannels; ++d2) { + const pixel = dyValues[dyOffset + d2]; + const weight = fltValues[fltOffset + d2]; + dotProd += pixel * weight; + } + } + } + } + dxValues[dxS0 * b + dxS1 * xF + dxS2 * xR + dxS3 * xC + d1] = + dotProd; + } + } + } + } + } + return dx.toTensor(); + } + conv2dDerFilter(x, dy, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([x, dy], 'conv2dDerFilter'); + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const filterHeight = convInfo.filterHeight; + const filterWidth = convInfo.filterWidth; + const isChannelsLast = convInfo.dataFormat === 'channelsLast'; + const dW = dist["buffer"](convInfo.filterShape, 'float32'); + const leftPad = convInfo.padInfo.left; + const topPad = convInfo.padInfo.top; + const xBuf = this.bufferSync(x); + const dyBuf = this.bufferSync(dy); + for (let wR = 0; wR < filterHeight; ++wR) { + const yRMin = Math.max(0, Math.ceil((topPad - wR) / strideHeight)); + const yRMax = Math.min(convInfo.outHeight, (convInfo.inHeight + topPad - wR) / strideHeight); + for (let wC = 0; wC < filterWidth; ++wC) { + const yCMin = Math.max(0, Math.ceil((leftPad - wC) / strideWidth)); + const yCMax = Math.min(convInfo.outWidth, (convInfo.inWidth + leftPad - wC) / strideWidth); + for (let d1 = 0; d1 < convInfo.inChannels; ++d1) { + for (let d2 = 0; d2 < convInfo.outChannels; ++d2) { + // Need to convolve. 
+ let dotProd = 0; + for (let b = 0; b < convInfo.batchSize; ++b) { + for (let yR = yRMin; yR < yRMax; ++yR) { + const xR = wR + yR * strideHeight - topPad; + for (let yC = yCMin; yC < yCMax; ++yC) { + const xC = wC + yC * strideWidth - leftPad; + if (isChannelsLast) { + dotProd += + xBuf.get(b, xR, xC, d1) * dyBuf.get(b, yR, yC, d2); + } + else { + dotProd += + xBuf.get(b, d1, xR, xC) * dyBuf.get(b, d2, yR, yC); + } + } + } + } + dW.set(dotProd, wR, wC, d1, d2); + } + } + } + } + return dW.toTensor(); + } + conv3dDerFilter(x, dy, convInfo) { + const strideDepth = convInfo.strideDepth; + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const filterDepth = convInfo.filterDepth; + const filterHeight = convInfo.filterHeight; + const filterWidth = convInfo.filterWidth; + const dw = dist["buffer"](convInfo.filterShape, 'float32'); + const dwValues = dw.values; + const [dwS0, dwS1, dwS2, dwS3] = dw.strides; + const dyValues = this.readSync(dy.dataId); + const [dyS0, dyS1, dyS2, dyS3] = dy.strides; + const xValues = this.readSync(x.dataId); + const [xS0, xS1, xS2, xS3] = x.strides; + const frontPad = convInfo.padInfo.front; + const leftPad = convInfo.padInfo.left; + const topPad = convInfo.padInfo.top; + for (let wF = 0; wF < filterDepth; ++wF) { + const yFMin = Math.max(0, Math.ceil((frontPad - wF) / strideDepth)); + const yFMax = Math.min(convInfo.outDepth, (convInfo.inDepth + frontPad - wF) / strideDepth); + const wOffset1 = wF * dwS0; + for (let wR = 0; wR < filterHeight; ++wR) { + const yRMin = Math.max(0, Math.ceil((topPad - wR) / strideHeight)); + const yRMax = Math.min(convInfo.outHeight, (convInfo.inHeight + topPad - wR) / strideHeight); + const wOffset2 = wR * dwS1 + wOffset1; + for (let wC = 0; wC < filterWidth; ++wC) { + const yCMin = Math.max(0, Math.ceil((leftPad - wC) / strideWidth)); + const yCMax = Math.min(convInfo.outWidth, (convInfo.inWidth + leftPad - wC) / strideWidth); + const wOffset3 = wC * dwS2 + wOffset2; 
+ for (let d1 = 0; d1 < convInfo.inChannels; ++d1) { + const wOffset4 = d1 * dwS3 + wOffset3; + for (let d2 = 0; d2 < convInfo.outChannels; ++d2) { + let dotProd = 0; + for (let b = 0; b < convInfo.batchSize; ++b) { + const xOffset1 = b * xS0; + const yOffset1 = b * dyS0; + for (let yF = yFMin; yF < yFMax; ++yF) { + const xF = wF + yF * strideDepth - frontPad; + const xOffset2 = xF * xS1 + xOffset1; + const yOffset2 = yF * dyS1 + yOffset1; + for (let yR = yRMin; yR < yRMax; ++yR) { + const xR = wR + yR * strideHeight - topPad; + const xOffset3 = xR * xS2 + xOffset2; + const yOffset3 = yR * dyS2 + yOffset2; + for (let yC = yCMin; yC < yCMax; ++yC) { + const xC = wC + yC * strideWidth - leftPad; + const xOffset4 = xC * xS3 + xOffset3; + const yOffset4 = yC * dyS3 + yOffset3; + dotProd += + xValues[xOffset4 + d1] * dyValues[yOffset4 + d2]; + } + } + } + } + dwValues[wOffset4 + d2] = dotProd; + } + } + } + } + } + return dw.toTensor(); + } + fusedDepthwiseConv2D({ input, filter, convInfo, bias, activation, preluActivationWeights }) { + let result = this.depthwiseConv2D(input, filter, convInfo); + if (bias) { + result = this.add(result, bias); + } + if (activation) { + result = + mapActivation(this, result, activation, preluActivationWeights); + } + return result; + } + depthwiseConv2D(x, filter, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([x, filter], 'depthwiseConv2D'); + const filterHeight = convInfo.filterHeight; + const filterWidth = convInfo.filterWidth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const padLeft = convInfo.padInfo.left; + const padTop = convInfo.padInfo.top; + const chMul = convInfo.outChannels / convInfo.inChannels; + const y = dist["buffer"](convInfo.outShape, x.dtype); + const xVals = this.readSync(x.dataId); + const wVals = this.readSync(filter.dataId); + const yVals = y.values; + for (let b = 0; b < convInfo.batchSize; ++b) { + const xOffset1 = b * x.strides[0]; + const 
yOffset1 = b * y.strides[0]; + for (let yR = 0; yR < convInfo.outHeight; ++yR) { + const yOffset2 = yOffset1 + yR * y.strides[1]; + const xRCorner = yR * convInfo.strideHeight - padTop; + for (let wR = 0; wR < filterHeight; ++wR) { + const xR = xRCorner + wR * dilationHeight; + if (xR < 0 || xR >= convInfo.inHeight) { + continue; + } + const wOffset1 = wR * filter.strides[0]; + const xOffset2 = xOffset1 + xR * x.strides[1]; + for (let yC = 0; yC < convInfo.outWidth; ++yC) { + const yOffset3 = yOffset2 + yC * y.strides[2]; + const xCCorner = yC * convInfo.strideWidth - padLeft; + for (let wC = 0; wC < filterWidth; ++wC) { + const xC = xCCorner + wC * dilationWidth; + if (xC < 0 || xC >= convInfo.inWidth) { + continue; + } + const wOffset2 = wOffset1 + wC * filter.strides[1]; + const xOffset3 = xOffset2 + xC * convInfo.inChannels; + let yOffset4 = yOffset3; + let wOffset3 = wOffset2; + for (let d1 = 0; d1 < convInfo.inChannels; ++d1) { + const xVal = xVals[xOffset3 + d1]; + for (let q = 0; q < chMul; ++q) { + yVals[yOffset4 + q] += xVal * wVals[wOffset3 + q]; + } + yOffset4 += chMul; + wOffset3 += chMul; + } + } + } + } + } + } + return y.toTensor(); + } + depthwiseConv2DDerInput(dy, filter, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([dy, filter], 'depthwiseConv2DDerInput'); + const dx = dist["buffer"](convInfo.inShape, 'float32'); + const dxValues = dx.values; + const [dxS0, dxS1, dxS2] = dx.strides; + const dyValues = this.readSync(dy.dataId); + const [dyS0, dyS1, dyS2] = dy.strides; + const fltValues = this.readSync(filter.dataId); + const [fltS0, fltS1, fltS2] = filter.strides; + const { batchSize, filterHeight, filterWidth, inChannels, inHeight, inWidth, outChannels, outHeight, outWidth, strideHeight, strideWidth } = convInfo; + const topPad = filterHeight - 1 - convInfo.padInfo.top; + const leftPad = filterWidth - 1 - convInfo.padInfo.left; + const chMul = outChannels / inChannels; + for (let b = 0; b < batchSize; ++b) { + for (let d1 = 0; d1 <
inChannels; ++d1) { + for (let xR = 0; xR < inHeight; ++xR) { + const xRCorner = xR - topPad; + const xRMin = Math.max(0, Math.ceil(xRCorner / strideHeight)); + const yRMax = Math.min(outHeight, (filterHeight + xRCorner) / strideHeight); + for (let xC = 0; xC < inWidth; ++xC) { + const xCCorner = xC - leftPad; + const xCMin = Math.max(0, Math.ceil(xCCorner / strideWidth)); + const yCMax = Math.min(outWidth, (filterWidth + xCCorner) / strideWidth); + let dotProd = 0; + for (let yR = xRMin; yR < yRMax; ++yR) { + const wR = yR * strideHeight - xRCorner; + for (let yC = xCMin; yC < yCMax; ++yC) { + const wC = yC * strideWidth - xCCorner; + const dyOffset = dyS0 * b + dyS1 * yR + dyS2 * yC; + const fltOffset = fltS0 * (filterHeight - 1 - wR) + + fltS1 * (filterWidth - 1 - wC) + fltS2 * d1; + for (let dm = 0; dm < chMul; ++dm) { + const d2 = d1 * chMul + dm; + const pixel = dyValues[dyOffset + d2]; + const weight = fltValues[fltOffset + dm]; + dotProd += pixel * weight; + } + } + } + dxValues[dxS0 * b + dxS1 * xR + dxS2 * xC + d1] = dotProd; + } + } + } + } + return dx.toTensor(); + } + depthwiseConv2DDerFilter(x, dy, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([x, dy], 'depthwiseConv2DDerFilter'); + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const filterHeight = convInfo.filterHeight; + const filterWidth = convInfo.filterWidth; + const dW = dist["buffer"](convInfo.filterShape, 'float32'); + const leftPad = convInfo.padInfo.left; + const topPad = convInfo.padInfo.top; + const chMul = convInfo.outChannels / convInfo.inChannels; + const xBuf = this.bufferSync(x); + const dyBuf = this.bufferSync(dy); + for (let wR = 0; wR < filterHeight; ++wR) { + const yRMin = Math.max(0, Math.ceil((topPad - wR) / strideHeight)); + const yRMax = Math.min(convInfo.outHeight, (convInfo.inHeight + topPad - wR) / strideHeight); + for (let wC = 0; wC < filterWidth; ++wC) { + const yCMin = Math.max(0, Math.ceil((leftPad - wC) / 
strideWidth)); + const yCMax = Math.min(convInfo.outWidth, (convInfo.inWidth + leftPad - wC) / strideWidth); + for (let d2 = 0; d2 < convInfo.outChannels; ++d2) { + const d1 = Math.trunc(d2 / chMul); + const dm = d2 % chMul; + let dotProd = 0; + for (let b = 0; b < convInfo.batchSize; ++b) { + for (let yR = yRMin; yR < yRMax; ++yR) { + const xR = wR + yR * strideHeight - topPad; + for (let yC = yCMin; yC < yCMax; ++yC) { + const xC = wC + yC * strideWidth - leftPad; + dotProd += xBuf.get(b, xR, xC, d1) * dyBuf.get(b, yR, yC, d2); + } + } + } + dW.set(dotProd, wR, wC, d1, dm); + } + } + } + return dW.toTensor(); + } + tile(x, reps) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'tile'); + return tile(this.bufferSync(x), reps); + } + pad(x, paddings, constantValue) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'pad'); + const outShape = paddings.map((p, i) => p[0] /* beforePad */ + x.shape[i] + p[1] /* afterPad */); + const start = paddings.map(p => p[0]); + const xBuffer = this.bufferSync(x); + const buffer = dist["buffer"](outShape, x.dtype); + if (constantValue !== 0) { + buffer.values.fill(constantValue); + } + for (let i = 0; i < x.size; i++) { + const coords = xBuffer.indexToLoc(i); + const outCoords = coords.map((c, i) => c + start[i]); + buffer.set(xBuffer.get(...coords), ...outCoords); + } + return buffer.toTensor(); + } + gather(x, indices, axis) { + Object(cpu_util["a" /* assertNotComplex */])([x, indices], 'gather'); + const newShape = x.shape.slice(); + const indicesValues = this.readSync(indices.dataId); + newShape[axis] = indicesValues.length; + const result = dist["buffer"](newShape, x.dtype); + const xBuf = this.bufferSync(x); + for (let i = 0; i < result.size; ++i) { + const newLoc = result.indexToLoc(i); + const originalLoc = newLoc.slice(); + originalLoc[axis] = indicesValues[newLoc[axis]]; + const originalIndex = xBuf.locToIndex(originalLoc); + result.values[i] = xBuf.values[originalIndex]; + } + return result.toTensor(); + } + 
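The `gather` implementation above remaps each output coordinate: the coordinate along `axis` is replaced by the corresponding entry of `indices` before reading from the source buffer. A minimal standalone sketch of the same index remapping, specialized to a 2-D row-major array (the helper name and shapes here are illustrative, not part of this bundle):

```javascript
// Gather along `axis` (0 = rows, 1 = columns) of a 2-D row-major array.
// For each output location, only the coordinate along `axis` is redirected
// through `indices`; the other coordinate is copied through unchanged.
function gather2d(values, shape, indices, axis) {
  const [rows, cols] = shape;
  const outShape = axis === 0 ? [indices.length, cols] : [rows, indices.length];
  const out = new Float32Array(outShape[0] * outShape[1]);
  for (let r = 0; r < outShape[0]; ++r) {
    for (let c = 0; c < outShape[1]; ++c) {
      const srcR = axis === 0 ? indices[r] : r;
      const srcC = axis === 1 ? indices[c] : c;
      out[r * outShape[1] + c] = values[srcR * cols + srcC];
    }
  }
  return { values: out, shape: outShape };
}

// Gathering rows [2, 0] of a 3x2 matrix reorders (and can repeat) rows:
const m = Float32Array.from([1, 2, 3, 4, 5, 6]); // [[1,2],[3,4],[5,6]]
const g = gather2d(m, [3, 2], [2, 0], 0);        // [[5,6],[1,2]]
```

The backend version generalizes this to arbitrary rank by converting flat output indices to coordinates (`indexToLoc`), patching the `axis` coordinate, and converting back (`locToIndex`).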
batchToSpaceND(x, blockShape, crops) { + Object(cpu_util["a" /* assertNotComplex */])([x], 'batchToSpaceND'); + const prod = blockShape.reduce((a, b) => a * b); + const reshaped = dist["backend_util"].getReshaped(x.shape, blockShape, prod); + const permuted = dist["backend_util"].getPermuted(reshaped.length, blockShape.length); + const reshapedPermuted = dist["backend_util"].getReshapedPermuted(x.shape, blockShape, prod); + const sliceBeginCoords = dist["backend_util"].getSliceBeginCoords(crops, blockShape.length); + const sliceSize = dist["backend_util"].getSliceSize(reshapedPermuted, crops, blockShape.length); + return dist["transpose"](x.reshape(reshaped), permuted) + .reshape(reshapedPermuted) + .slice(sliceBeginCoords, sliceSize); + } + spaceToBatchND(x, blockShape, paddings) { + Object(cpu_util["a" /* assertNotComplex */])([x], 'spaceToBatchND'); + const prod = blockShape.reduce((a, b) => a * b); + const completePaddings = [[0, 0]]; + completePaddings.push(...paddings); + for (let i = 1 + blockShape.length; i < x.shape.length; ++i) { + completePaddings.push([0, 0]); + } + const paddedX = x.pad(completePaddings); + const reshapedPaddedShape = dist["backend_util"].getReshaped(paddedX.shape, blockShape, prod, false); + const permutedReshapedPaddedPermutation = dist["backend_util"].getPermuted(reshapedPaddedShape.length, blockShape.length, false); + const flattenShape = dist["backend_util"].getReshapedPermuted(paddedX.shape, blockShape, prod, false); + return dist["transpose"](paddedX.reshape(reshapedPaddedShape), permutedReshapedPaddedPermutation) + .reshape(flattenShape); + } + maxPool(x, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'maxPool'); + const xValues = this.readSync(x.dataId); + return Object(pool_utils["b" /* pool */])(xValues, x.shape, x.dtype, x.strides, convInfo, 'max') + .toTensor(); + } + maxPoolBackprop(dy, x, y, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([x, y], 'maxPoolBackprop'); + const xValues = 
this.readSync(x.dataId); + const maxPosBuf = Object(dist["buffer"])(convInfo.outShape, x.dtype, Object(pool_utils["a" /* maxPoolPositions */])(xValues, x.shape, x.dtype, convInfo).values); + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const effectiveFilterHeight = convInfo.effectiveFilterHeight; + const effectiveFilterWidth = convInfo.effectiveFilterWidth; + const padLeft = effectiveFilterWidth - 1 - convInfo.padInfo.left; + const padTop = effectiveFilterHeight - 1 - convInfo.padInfo.top; + const dx = dist["buffer"](x.shape, 'float32'); + const dyBuf = this.bufferSync(dy); + for (let b = 0; b < convInfo.batchSize; ++b) { + for (let d = 0; d < convInfo.inChannels; ++d) { + for (let dxR = 0; dxR < convInfo.inHeight; ++dxR) { + for (let dxC = 0; dxC < convInfo.inWidth; ++dxC) { + // Shader code begins. + const dyRCorner = dxR - padTop; + const dyCCorner = dxC - padLeft; + let dotProd = 0; + for (let wR = 0; wR < effectiveFilterHeight; wR += dilationHeight) { + const dyR = (dyRCorner + wR) / strideHeight; + if (dyR < 0 || dyR >= convInfo.outHeight || + Math.floor(dyR) !== dyR) { + continue; + } + for (let wC = 0; wC < effectiveFilterWidth; wC += dilationWidth) { + const dyC = (dyCCorner + wC) / strideWidth; + if (dyC < 0 || dyC >= convInfo.outWidth || + Math.floor(dyC) !== dyC) { + continue; + } + const maxPos = effectiveFilterHeight * effectiveFilterWidth - + 1 - maxPosBuf.get(b, dyR, dyC, d); + const curPos = wR * effectiveFilterWidth + wC; + const mask = maxPos === curPos ? 
1 : 0; + if (mask === 0) { + continue; + } + const pixel = dyBuf.get(b, dyR, dyC, d); + dotProd += pixel * mask; + } + } + dx.set(dotProd, b, dxR, dxC, d); + } + } + } + } + return dx.toTensor(); + } + avgPoolBackprop(dy, x, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([dy, x], 'avgPoolBackprop'); + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const filterHeight = convInfo.filterHeight; + const filterWidth = convInfo.filterWidth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const effectiveFilterHeight = convInfo.effectiveFilterHeight; + const effectiveFilterWidth = convInfo.effectiveFilterWidth; + const padLeft = effectiveFilterWidth - 1 - convInfo.padInfo.left; + const padTop = effectiveFilterHeight - 1 - convInfo.padInfo.top; + const dx = dist["buffer"](x.shape, 'float32'); + const avgMultiplier = 1 / (filterHeight * filterWidth); + const dyBuf = this.bufferSync(dy); + for (let b = 0; b < convInfo.batchSize; ++b) { + for (let d = 0; d < convInfo.inChannels; ++d) { + for (let dxR = 0; dxR < convInfo.inHeight; ++dxR) { + for (let dxC = 0; dxC < convInfo.inWidth; ++dxC) { + // Shader code begins. 
+ const dyRCorner = dxR - padTop; + const dyCCorner = dxC - padLeft; + let dotProd = 0; + for (let wR = 0; wR < effectiveFilterHeight; wR += dilationHeight) { + const dyR = (dyRCorner + wR) / strideHeight; + if (dyR < 0 || dyR >= convInfo.outHeight || + Math.floor(dyR) !== dyR) { + continue; + } + for (let wC = 0; wC < effectiveFilterWidth; wC += dilationWidth) { + const dyC = (dyCCorner + wC) / strideWidth; + if (dyC < 0 || dyC >= convInfo.outWidth || + Math.floor(dyC) !== dyC) { + continue; + } + const pixel = dyBuf.get(b, dyR, dyC, d); + dotProd += pixel; + } + } + dx.set(dotProd * avgMultiplier, b, dxR, dxC, d); + } + } + } + } + return dx.toTensor(); + } + pool3d(x, convInfo, poolType) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'pool3d'); + const strideDepth = convInfo.strideDepth; + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const dilationDepth = convInfo.dilationDepth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const effectiveFilterDepth = convInfo.effectiveFilterDepth; + const effectiveFilterHeight = convInfo.effectiveFilterHeight; + const effectiveFilterWidth = convInfo.effectiveFilterWidth; + const padFront = convInfo.padInfo.front; + const padTop = convInfo.padInfo.top; + const padLeft = convInfo.padInfo.left; + const initialValue = (poolType === 'max' ? 
Number.NEGATIVE_INFINITY : + Number.POSITIVE_INFINITY); + const xValues = this.readSync(x.dataId); + const output = dist["buffer"](convInfo.outShape, x.dtype); + const outputVals = output.values; + const outputBatchStrides = convInfo.outShape[1] * convInfo.outShape[2] * + convInfo.outShape[3] * convInfo.outShape[4]; + const outputDepthStrides = convInfo.outShape[2] * convInfo.outShape[3] * convInfo.outShape[4]; + const outputRowStrides = convInfo.outShape[3] * convInfo.outShape[4]; + const outputColStrides = convInfo.outShape[4]; + for (let batch = 0; batch < convInfo.batchSize; ++batch) { + const outputBatchOffset = batch * outputBatchStrides; + const inputBatchOffset = batch * x.strides[0]; + for (let channel = 0; channel < convInfo.inChannels; ++channel) { + for (let yDepth = 0; yDepth < convInfo.outDepth; ++yDepth) { + const xDepthCorner = yDepth * strideDepth - padFront; + let xDepthMin = xDepthCorner; + while (xDepthMin < 0) { + xDepthMin += dilationDepth; + } + const xDepthMax = Math.min(convInfo.inDepth, effectiveFilterDepth + xDepthCorner); + const outputDepthOffset = outputBatchOffset + yDepth * outputDepthStrides; + for (let yRow = 0; yRow < convInfo.outHeight; ++yRow) { + const xRowCorner = yRow * strideHeight - padTop; + let xRowMin = xRowCorner; + while (xRowMin < 0) { + xRowMin += dilationHeight; + } + const xRowMax = Math.min(convInfo.inHeight, effectiveFilterHeight + xRowCorner); + const outputRowOffset = outputDepthOffset + yRow * outputRowStrides; + for (let yCol = 0; yCol < convInfo.outWidth; ++yCol) { + const xColCorner = yCol * strideWidth - padLeft; + let xColMin = xColCorner; + while (xColMin < 0) { + xColMin += dilationWidth; + } + const xColMax = Math.min(convInfo.inWidth, effectiveFilterWidth + xColCorner); + // Shader code begins + const outputColOffset = outputRowOffset + yCol * outputColStrides; + let minMaxValue = initialValue; + let avgValue = 0; + let count = 0; + for (let xDepth = xDepthMin; xDepth < xDepthMax; xDepth += 
dilationDepth) { + const xDepthOffset = inputBatchOffset + xDepth * x.strides[1]; + for (let xRow = xRowMin; xRow < xRowMax; xRow += dilationHeight) { + const xRowOffset = xDepthOffset + xRow * x.strides[2]; + for (let xCol = xColMin; xCol < xColMax; xCol += dilationWidth) { + const xColOffset = xRowOffset + xCol * x.strides[3]; + const pixel = xValues[xColOffset + channel]; + if ((poolType === 'max' && pixel > minMaxValue)) { + minMaxValue = pixel; + } + else if (poolType === 'avg') { + avgValue += pixel; + count++; + } + if (isNaN(minMaxValue)) { + break; + } + } + if (isNaN(minMaxValue)) { + break; + } + } + if (isNaN(minMaxValue)) { + break; + } + } + const outputOffset = outputColOffset + channel; + outputVals[outputOffset] = + poolType === 'avg' ? avgValue / count : minMaxValue; + } + } + } + } + } + return output.toTensor(); + } + avgPool3d(x, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'avgPool3d'); + return this.pool3d(x, convInfo, 'avg').toFloat(); + } + avgPool3dBackprop(dy, x, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([dy, x], 'avgPool3dBackprop'); + const strideDepth = convInfo.strideDepth; + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const filterDepth = convInfo.filterDepth; + const filterHeight = convInfo.filterHeight; + const filterWidth = convInfo.filterWidth; + const dilationDepth = convInfo.dilationDepth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const effectiveFilterDepth = convInfo.effectiveFilterDepth; + const effectiveFilterHeight = convInfo.effectiveFilterHeight; + const effectiveFilterWidth = convInfo.effectiveFilterWidth; + const padFront = effectiveFilterDepth - 1 - convInfo.padInfo.front; + const padLeft = effectiveFilterWidth - 1 - convInfo.padInfo.left; + const padTop = effectiveFilterHeight - 1 - convInfo.padInfo.top; + const dx = dist["buffer"](x.shape, 'float32'); + const avgMultiplier = 1 / 
(filterDepth * filterHeight * filterWidth); + const dyBuf = this.bufferSync(dy); + for (let batch = 0; batch < convInfo.batchSize; ++batch) { + for (let channel = 0; channel < convInfo.inChannels; ++channel) { + for (let dxDepth = 0; dxDepth < convInfo.inDepth; ++dxDepth) { + for (let dxRow = 0; dxRow < convInfo.inHeight; ++dxRow) { + for (let dxCol = 0; dxCol < convInfo.inWidth; ++dxCol) { + // Shader code begins. + const dyDepthCorner = dxDepth - padFront; + const dyRowCorner = dxRow - padTop; + const dyColCorner = dxCol - padLeft; + let dotProd = 0; + for (let wDepth = 0; wDepth < effectiveFilterDepth; wDepth += dilationDepth) { + const dyDepth = (dyDepthCorner + wDepth) / strideDepth; + if (dyDepth < 0 || dyDepth >= convInfo.outDepth || + Math.floor(dyDepth) !== dyDepth) { + continue; + } + for (let wRow = 0; wRow < effectiveFilterHeight; wRow += dilationHeight) { + const dyRow = (dyRowCorner + wRow) / strideHeight; + if (dyRow < 0 || dyRow >= convInfo.outHeight || + Math.floor(dyRow) !== dyRow) { + continue; + } + for (let wCol = 0; wCol < effectiveFilterWidth; wCol += dilationWidth) { + const dyCol = (dyColCorner + wCol) / strideWidth; + if (dyCol < 0 || dyCol >= convInfo.outWidth || + Math.floor(dyCol) !== dyCol) { + continue; + } + const pixel = dyBuf.get(batch, dyDepth, dyRow, dyCol, channel); + dotProd += pixel; + } + } + } + dx.set(dotProd * avgMultiplier, batch, dxDepth, dxRow, dxCol, channel); + } + } + } + } + } + return dx.toTensor(); + } + maxPool3d(x, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'maxPool3d'); + return this.pool3d(x, convInfo, 'max').toFloat(); + } + maxPool3dPositions(x, convInfo) { + const maxPositions = dist["buffer"](convInfo.outShape, 'int32'); + const strideDepth = convInfo.strideDepth; + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const dilationDepth = convInfo.dilationDepth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = 
convInfo.dilationWidth; + const effectiveFilterDepth = convInfo.effectiveFilterDepth; + const effectiveFilterHeight = convInfo.effectiveFilterHeight; + const effectiveFilterWidth = convInfo.effectiveFilterWidth; + const padFront = convInfo.padInfo.front; + const padTop = convInfo.padInfo.top; + const padLeft = convInfo.padInfo.left; + const xBuf = this.bufferSync(x); + for (let batch = 0; batch < convInfo.batchSize; ++batch) { + for (let channel = 0; channel < convInfo.inChannels; ++channel) { + for (let yDepth = 0; yDepth < convInfo.outDepth; ++yDepth) { + const xDepthCorner = yDepth * strideDepth - padFront; + let xDepthMin = xDepthCorner; + while (xDepthMin < 0) { + xDepthMin += dilationDepth; + } + const xDepthMax = Math.min(convInfo.inDepth, effectiveFilterDepth + xDepthCorner); + for (let yRow = 0; yRow < convInfo.outHeight; ++yRow) { + const xRowCorner = yRow * strideHeight - padTop; + let xRowMin = xRowCorner; + while (xRowMin < 0) { + xRowMin += dilationHeight; + } + const xRowMax = Math.min(convInfo.inHeight, effectiveFilterHeight + xRowCorner); + for (let yCol = 0; yCol < convInfo.outWidth; ++yCol) { + const xColCorner = yCol * strideWidth - padLeft; + let xColMin = xColCorner; + while (xColMin < 0) { + xColMin += dilationWidth; + } + const xColMax = Math.min(convInfo.inWidth, effectiveFilterWidth + xColCorner); + // Shader code begins + let maxValue = Number.NEGATIVE_INFINITY; + let maxPosition = -1; + for (let xDepth = xDepthMin; xDepth < xDepthMax; xDepth += dilationDepth) { + const wDepth = xDepth - xDepthCorner; + for (let xRow = xRowMin; xRow < xRowMax; xRow += dilationHeight) { + const wRow = xRow - xRowCorner; + for (let xCol = xColMin; xCol < xColMax; xCol += dilationWidth) { + const wCol = xCol - xColCorner; + const pixel = xBuf.get(batch, xDepth, xRow, xCol, channel); + if (pixel >= maxValue) { + maxValue = pixel; + // Linearize (wDepth, wRow, wCol) with row stride effectiveFilterWidth, + // matching the curPos computation in maxPool3dBackprop. + maxPosition = wDepth * effectiveFilterHeight * + effectiveFilterWidth + + wRow * effectiveFilterWidth + wCol; + } + } + } + } + 
maxPositions.set(maxPosition, batch, yDepth, yRow, yCol, channel); + } + } + } + } + } + return maxPositions.toTensor(); + } + maxPool3dBackprop(dy, x, y, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])([x, y], 'maxPool3dBackprop'); + const maxPositions = this.maxPool3dPositions(x, convInfo); + const strideDepth = convInfo.strideDepth; + const strideHeight = convInfo.strideHeight; + const strideWidth = convInfo.strideWidth; + const dilationDepth = convInfo.dilationDepth; + const dilationHeight = convInfo.dilationHeight; + const dilationWidth = convInfo.dilationWidth; + const effectiveFilterDepth = convInfo.effectiveFilterDepth; + const effectiveFilterHeight = convInfo.effectiveFilterHeight; + const effectiveFilterWidth = convInfo.effectiveFilterWidth; + const padFront = effectiveFilterDepth - 1 - convInfo.padInfo.front; + const padLeft = effectiveFilterWidth - 1 - convInfo.padInfo.left; + const padTop = effectiveFilterHeight - 1 - convInfo.padInfo.top; + const dx = dist["buffer"](x.shape, 'float32'); + const maxPosBuf = this.bufferSync(maxPositions); + const dyBuf = this.bufferSync(dy); + for (let batch = 0; batch < convInfo.batchSize; ++batch) { + for (let channel = 0; channel < convInfo.inChannels; ++channel) { + for (let dxDepth = 0; dxDepth < convInfo.inDepth; ++dxDepth) { + for (let dxRow = 0; dxRow < convInfo.inHeight; ++dxRow) { + for (let dxCol = 0; dxCol < convInfo.inWidth; ++dxCol) { + // Shader code begins + const dyDepthCorner = dxDepth - padFront; + const dyRowCorner = dxRow - padTop; + const dyColCorner = dxCol - padLeft; + let dotProd = 0; + for (let wDepth = 0; wDepth < effectiveFilterDepth; wDepth += dilationDepth) { + const dyDepth = (dyDepthCorner + wDepth) / strideDepth; + if (dyDepth < 0 || dyDepth >= convInfo.outDepth || + Math.floor(dyDepth) !== dyDepth) { + continue; + } + for (let wRow = 0; wRow < effectiveFilterHeight; wRow += dilationHeight) { + const dyRow = (dyRowCorner + wRow) / strideHeight; + if (dyRow < 0 || dyRow >= 
convInfo.outHeight || + Math.floor(dyRow) !== dyRow) { + continue; + } + for (let wCol = 0; wCol < effectiveFilterWidth; wCol += dilationWidth) { + const dyCol = (dyColCorner + wCol) / strideWidth; + if (dyCol < 0 || dyCol >= convInfo.outWidth || + Math.floor(dyCol) !== dyCol) { + continue; + } + const maxPos = effectiveFilterDepth * + effectiveFilterHeight * effectiveFilterWidth - + 1 - + maxPosBuf.get(batch, dyDepth, dyRow, dyCol, channel); + const curPos = wDepth * effectiveFilterHeight * effectiveFilterWidth + + wRow * effectiveFilterWidth + wCol; + const mask = maxPos === curPos ? 1 : 0; + if (mask === 0) { + continue; + } + const pixel = dyBuf.get(batch, dyDepth, dyRow, dyCol, channel); + dotProd += pixel * mask; + } + } + } + dx.set(dotProd, batch, dxDepth, dxRow, dxCol, channel); + } + } + } + } + } + return dx.toTensor(); + } + cast(x, dtype) { + return dist["backend_util"].castTensor(x, dtype, this); + } + reshape(x, shape) { + return dist["backend_util"].reshapeTensor(x, shape); + } + avgPool(x, convInfo) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'avgPool'); + const xValues = this.readSync(x.dataId); + return Object(pool_utils["b" /* pool */])(xValues, x.shape, x.dtype, x.strides, convInfo, 'avg') + .toTensor() + .toFloat(); + } + resizeBilinear(x, newHeight, newWidth, alignCorners) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'resizeBilinear'); + const [batch, oldHeight, oldWidth, numChannels] = x.shape; + const xValues = this.readSync(x.dataId); + const result = new Float32Array(dist["util"].sizeFromShape([batch, newHeight, newWidth, numChannels])); + const effectiveInputSize = [ + (alignCorners && newHeight > 1) ? oldHeight - 1 : oldHeight, + (alignCorners && newWidth > 1) ? oldWidth - 1 : oldWidth + ]; + const effectiveOutputSize = [ + (alignCorners && newHeight > 1) ? newHeight - 1 : newHeight, + (alignCorners && newWidth > 1) ? 
newWidth - 1 : newWidth + ]; + let outputIdx = 0; + const effectiveRowSizeRatio = effectiveInputSize[0] / effectiveOutputSize[0]; + const effectiveColSizeRatio = effectiveInputSize[1] / effectiveOutputSize[1]; + for (let b = 0; b < batch; b++) { + for (let r = 0; r < newHeight; r++) { + const sourceFracRow = effectiveRowSizeRatio * r; + const sourceRowFloor = Math.floor(sourceFracRow); + const rowFrac = sourceFracRow - sourceRowFloor; + const sourceRowCeil = Math.min(oldHeight - 1, Math.ceil(sourceFracRow)); + const topRowOffset = b * x.strides[0] + sourceRowFloor * x.strides[1]; + const botRowOffset = b * x.strides[0] + sourceRowCeil * x.strides[1]; + for (let c = 0; c < newWidth; c++) { + const sourceFracCol = effectiveColSizeRatio * c; + const sourceColFloor = Math.floor(sourceFracCol); + const colFrac = sourceFracCol - sourceColFloor; + const sourceColCeil = Math.min(oldWidth - 1, Math.ceil(sourceFracCol)); + const topLeftOffest = topRowOffset + sourceColFloor * x.strides[2]; + const botLeftOffset = botRowOffset + sourceColFloor * x.strides[2]; + const topRightOffset = topRowOffset + sourceColCeil * x.strides[2]; + const botRightOffest = botRowOffset + sourceColCeil * x.strides[2]; + for (let d = 0; d < numChannels; d++) { + // Begin shader. + // Compute the fractional index of the source. 
+ const topLeft = xValues[topLeftOffest + d]; + const bottomLeft = xValues[botLeftOffset + d]; + const topRight = xValues[topRightOffset + d]; + const bottomRight = xValues[botRightOffest + d]; + const top = topLeft + (topRight - topLeft) * colFrac; + const bottom = bottomLeft + (bottomRight - bottomLeft) * colFrac; + const newValue = top + (bottom - top) * rowFrac; + result[outputIdx++] = newValue; + } + } + } + } + return dist["tensor"](result, [batch, newHeight, newWidth, numChannels]); + } + resizeBilinearBackprop(dy, x, alignCorners) { + Object(cpu_util["a" /* assertNotComplex */])([dy, x], 'resizeBilinearBackprop'); + const [batch, xHeight, xWidth, depth] = x.shape; + const [, yHeight, yWidth] = dy.shape; + const output = new Float32Array(batch * xHeight * xWidth * depth); + // In the backwards pass, we want to find the pixels that were generated + // for each pixel in the input image the forward pass and add the + // corresponding coefficient from dy to the gradient (with some + // interpolation). + const effectiveXSize = [ + (alignCorners && yHeight > 1) ? xHeight - 1 : xHeight, + (alignCorners && yWidth > 1) ? xWidth - 1 : xWidth + ]; + const effectiveYSize = [ + (alignCorners && yHeight > 1) ? yHeight - 1 : yHeight, + (alignCorners && yWidth > 1) ? 
yWidth - 1 : yWidth + ]; + const heightScale = effectiveXSize[0] / effectiveYSize[0]; + const widthScale = effectiveXSize[1] / effectiveYSize[1]; + // Reference implementation + // tslint:disable-next-line:max-line-length + // https://github.com/tensorflow/tensorflow/blob/3039375c86a5bbc9610c7725dcaa95d635f87ba2/tensorflow/core/kernels/resize_bilinear_op.cc#L275 + const dyValues = this.readSync(dy.dataId); + let offset = 0; + for (let b = 0; b < batch; b++) { + const bOffset = b * x.strides[0]; + for (let r = 0; r < yHeight; r++) { + const dxR = r * heightScale; + const topDxRIndex = Math.floor(dxR); + const bottomDxRIndex = Math.min(Math.ceil(dxR), xHeight - 1); + const topDxROffset = bOffset + topDxRIndex * x.strides[1]; + const bottomDxROffset = bOffset + bottomDxRIndex * x.strides[1]; + const dxRLerp = dxR - topDxRIndex; + const inverseDxRLerp = 1.0 - dxRLerp; + for (let c = 0; c < yWidth; c++) { + const dxC = c * widthScale; + const leftDxCIndex = Math.floor(dxC); + const rightDxCIndex = Math.min(Math.ceil(dxC), xWidth - 1); + const dxCLerp = dxC - leftDxCIndex; + const inverseDxCLerp = 1.0 - dxCLerp; + const topLeftRCOffset = topDxROffset + leftDxCIndex * x.strides[2]; + const topRightRCOffset = topDxROffset + rightDxCIndex * x.strides[2]; + const bottomLeftRCOffset = bottomDxROffset + leftDxCIndex * x.strides[2]; + const bottomRightRCOffset = bottomDxROffset + rightDxCIndex * x.strides[2]; + const inverseDxRLerpTimesInverseDxCLerp = inverseDxRLerp * inverseDxCLerp; + const inverseDxRLerpTimesDxCLerp = inverseDxRLerp * dxCLerp; + const dxRLerpTimesInverseDxCLerp = dxRLerp * inverseDxCLerp; + const dxRLerpTimesDxCLerp = dxRLerp * dxCLerp; + for (let d = 0; d < depth; d++) { + const dyVal = dyValues[offset++]; + output[topLeftRCOffset + d] += + dyVal * inverseDxRLerpTimesInverseDxCLerp; + output[topRightRCOffset + d] += dyVal * inverseDxRLerpTimesDxCLerp; + output[bottomLeftRCOffset + d] += + dyVal * dxRLerpTimesInverseDxCLerp; + output[bottomRightRCOffset + d] 
+= dyVal * dxRLerpTimesDxCLerp; + } + } + } + } + return dist["tensor4d"](output, [batch, xHeight, xWidth, depth], x.dtype); + } + resizeNearestNeighbor(x, newHeight, newWidth, alignCorners) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'resizeNearestNeighbor'); + const [batch, oldHeight, oldWidth, numChannels] = x.shape; + const xValues = this.readSync(x.dataId); + const output = new Float32Array(batch * newHeight * newWidth * numChannels); + const effectiveInputSize = [ + (alignCorners && newHeight > 1) ? oldHeight - 1 : oldHeight, + (alignCorners && newWidth > 1) ? oldWidth - 1 : oldWidth + ]; + const effectiveOutputSize = [ + (alignCorners && newHeight > 1) ? newHeight - 1 : newHeight, + (alignCorners && newWidth > 1) ? newWidth - 1 : newWidth + ]; + const effectiveRowSizeRatio = effectiveInputSize[0] / effectiveOutputSize[0]; + const effectiveColSizeRatio = effectiveInputSize[1] / effectiveOutputSize[1]; + let outputOffset = 0; + for (let b = 0; b < batch; b++) { + const batchOffset = b * x.strides[0]; + for (let r = 0; r < newHeight; r++) { + const sourceFracRow = effectiveRowSizeRatio * r; + const sourceNearestRow = Math.min(oldHeight - 1, alignCorners ? Math.round(sourceFracRow) : + Math.floor(sourceFracRow)); + const rowOffset = batchOffset + sourceNearestRow * x.strides[1]; + for (let c = 0; c < newWidth; c++) { + const sourceFracCol = effectiveColSizeRatio * c; + const sourceNearestCol = Math.min(oldWidth - 1, alignCorners ? Math.round(sourceFracCol) : + Math.floor(sourceFracCol)); + const colOffset = rowOffset + sourceNearestCol * x.strides[2]; + for (let d = 0; d < numChannels; d++) { + // Begin shader. + // Compute the fractional index of the source. 
+ const newVal = xValues[colOffset + d]; + output[outputOffset++] = newVal; + } + } + } + } + return dist["tensor"](output, [batch, newHeight, newWidth, numChannels], x.dtype); + } + resizeNearestNeighborBackprop(dy, x, alignCorners) { + Object(cpu_util["a" /* assertNotComplex */])([dy, x], 'resizeNearestNeighborBackprop'); + const [batch, xHeight, xWidth, depth] = x.shape; + const [, yHeight, yWidth] = dy.shape; + const output = new Float32Array(batch * xHeight * xWidth * depth); + const dyValues = this.readSync(dy.dataId); + // In the backwards pass, we want to find the pixels that were generated + // for each pixel in the input image the forward pass + const effectiveXSize = [ + (alignCorners && yHeight > 1) ? xHeight - 1 : xHeight, + (alignCorners && yWidth > 1) ? xWidth - 1 : xWidth + ]; + const effectiveYSize = [ + (alignCorners && yHeight > 1) ? yHeight - 1 : yHeight, + (alignCorners && yWidth > 1) ? yWidth - 1 : yWidth + ]; + const heightScale = effectiveXSize[0] / effectiveYSize[0]; + const widthScale = effectiveXSize[1] / effectiveYSize[1]; + const invHeightScale = 1 / heightScale; + const invWidthScale = 1 / widthScale; + // This defines the size of the window of values around a particular + // index in dy that we want to search for contributions to dx. + const winHeight = (Math.ceil(invHeightScale) * 2) + 2; + const winWidth = (Math.ceil(invWidthScale) * 2) + 2; + // Loop over the output space. 
+ for (let b = 0; b < batch; b++) { + const batchOffset = b * x.strides[0]; + for (let r = 0; r < xHeight; r++) { + const rowOffset = batchOffset + r * x.strides[1]; + // Compute bounds for where in dy we will look + const startRLerp = Math.floor(r * invHeightScale); + const startDyR = Math.floor(startRLerp - (winHeight / 2)); + for (let c = 0; c < xWidth; c++) { + const colOffset = rowOffset + c * x.strides[2]; + // Compute bounds for where in dy we will look + const startCLerp = Math.floor(c * invWidthScale); + const startDyC = Math.floor(startCLerp - (winWidth / 2)); + for (let d = 0; d < depth; d++) { + let accum = 0; + // loop over dy + for (let dyRIndex = 0; dyRIndex < winHeight; dyRIndex++) { + const dyR = dyRIndex + startDyR; + // Guard against the window exceeding the bounds of dy + if (dyR < 0 || dyR >= yHeight) { + continue; + } + const dyROffset = batchOffset + dyR * dy.strides[1]; + const sourceFracRow = dyR * heightScale; + const sourceNearestRow = Math.min(xHeight - 1, alignCorners ? Math.round(sourceFracRow) : + Math.floor(sourceFracRow)); + if (r !== sourceNearestRow) { + continue; + } + for (let dyCIndex = 0; dyCIndex < winWidth; dyCIndex++) { + const dyC = dyCIndex + startDyC; + // Guard against the window exceeding the bounds of dy + if (dyC < 0 || dyC >= yWidth) { + continue; + } + const dyCOffset = dyROffset + dyC * dy.strides[2]; + const sourceFracCol = dyC * widthScale; + const sourceNearestCol = Math.min(xWidth - 1, alignCorners ? 
Math.round(sourceFracCol) : + Math.floor(sourceFracCol)); + if (c === sourceNearestCol) { + accum += dyValues[dyCOffset + d]; + } + } + } + output[colOffset + d] = accum; + } + } + } + } + return dist["tensor4d"](output, x.shape, x.dtype); + } + batchNorm(x, mean, variance, offset, scale, varianceEpsilon) { + Object(cpu_util["a" /* assertNotComplex */])([x, mean, variance, scale, offset], 'batchNorm'); + const xVals = this.readSync(x.dataId); + const mVals = this.readSync(mean.dataId); + const varVals = this.readSync(variance.dataId); + const sVals = scale ? this.readSync(scale.dataId) : + new Float32Array([1]); + const offVals = offset ? this.readSync(offset.dataId) : + new Float32Array([0]); + const outVals = new Float32Array(xVals.length); + const offValsLength = offVals.length; + const sValsLength = sVals.length; + const varValsLength = varVals.length; + const mValsLength = mVals.length; + let offi = 0; + let mi = 0; + let si = 0; + let vi = 0; + for (let i = 0; i < xVals.length; ++i) { + outVals[i] = offVals[offi++] + + (xVals[i] - mVals[mi++]) * sVals[si++] / + Math.sqrt(varVals[vi++] + varianceEpsilon); + if (offi >= offValsLength) { + offi = 0; + } + if (mi >= mValsLength) { + mi = 0; + } + if (si >= sValsLength) { + si = 0; + } + if (vi >= varValsLength) { + vi = 0; + } + } + return dist["tensor4d"](outVals, x.shape); + } + localResponseNormalization4D(x, depthRadius, bias, alpha, beta) { + Object(cpu_util["a" /* assertNotComplex */])(x, 'localResponseNormalization4D'); + const channels = x.shape[3]; + const maxD = channels - 1; + const xValues = this.readSync(x.dataId); + const size = x.size; + const result = new Float32Array(size); + function sumAcrossChannels(offset) { + const currentChannel = offset % channels; + let beginSumOffset = offset - currentChannel + Math.max(0, currentChannel - depthRadius); + const endSumOffset = offset - currentChannel + + Math.min(currentChannel + depthRadius, maxD); + let sum = 0.0; + for (; beginSumOffset <= 
endSumOffset; beginSumOffset++) { + const z = xValues[beginSumOffset]; + sum += z * z; + } + return sum; + } + for (let offset = 0; offset < size; offset++) { + const sum = sumAcrossChannels(offset); + const val = xValues[offset] * Math.pow(bias + alpha * sum, -beta); + result[offset] = val; + } + return dist["tensor4d"](result, x.shape); + } + LRNGrad(dy, inputImage, outputImage, depthRadius, bias, alpha, beta) { + Object(cpu_util["a" /* assertNotComplex */])(dy, 'LRNGrad'); + const channels = dy.shape[3]; + const dyValues = this.readSync(dy.dataId); + const inputImageValues = this.readSync(inputImage.dataId); + const outputImageValues = this.readSync(outputImage.dataId); + const result = new Float32Array(dy.size); + const size = dy.size; + for (let offset = 0; offset < size; offset++) { + const currentChannel = offset % channels; + const depthBegin = (offset - currentChannel) + Math.max(0, currentChannel - depthRadius); + const depthEnd = (offset - currentChannel) + + Math.min(channels, currentChannel + depthRadius + 1); + let norm = 0; + for (let k = depthBegin; k < depthEnd; k++) { + norm += Math.pow(inputImageValues[k], 2); + } + norm = alpha * norm + bias; + for (let k = depthBegin; k < depthEnd; k++) { + let dyi = -2 * alpha * beta * inputImageValues[k] * + outputImageValues[offset] / norm; + if (offset === k) { + dyi += Math.pow(norm, -beta); + } + dyi *= dyValues[offset]; + result[k] += dyi; + } + } + return dist["tensor4d"](result, dy.shape); + } + multinomial(logits, normalized, numSamples, seed) { + Object(cpu_util["a" /* assertNotComplex */])(logits, 'multinomial'); + const probabilities = normalized ? 
logits : dist["softmax"](logits); + const batchSize = probabilities.shape[0]; + const numEvents = probabilities.shape[1]; + const res = dist["zeros"]([batchSize, numSamples], 'int32'); + const resVals = this.readSync(res.dataId); + const probVals = this.readSync(probabilities.dataId); + for (let b = 0; b < batchSize; ++b) { + const offset = b * numEvents; + // The cdf won't include the last event. It will be implicit if no other + // event happened. + const cdf = new Float32Array(numEvents - 1); + cdf[0] = probVals[offset]; + for (let event = 1; event < cdf.length; ++event) { + cdf[event] = cdf[event - 1] + probVals[offset + event]; + } + const random = seedrandom["alea"](seed.toString()); + const outOffset = b * numSamples; + for (let sampleId = 0; sampleId < numSamples; ++sampleId) { + const r = random(); + // Assume last event happened by default. + resVals[outOffset + sampleId] = cdf.length; + for (let event = 0; event < cdf.length; event++) { + if (r < cdf[event]) { + resVals[outOffset + sampleId] = event; + break; + } + } + } + } + return res; + } + oneHot(indices, depth, onValue, offValue) { + Object(cpu_util["a" /* assertNotComplex */])(indices, 'oneHot'); + const res = new Float32Array(indices.size * depth); + res.fill(offValue); + const indicesVal = this.readSync(indices.dataId); + for (let event = 0; event < indices.size; ++event) { + if (indicesVal[event] >= 0 && indicesVal[event] < depth) { + res[event * depth + indicesVal[event]] = onValue; + } + } + return dist["tensor2d"](res, [indices.size, depth], 'int32'); + } + nonMaxSuppression(boxes, scores, maxOutputSize, iouThreshold, scoreThreshold) { + Object(cpu_util["a" /* assertNotComplex */])(boxes, 'nonMaxSuppression'); + const boxesVals = this.readSync(boxes.dataId); + const scoresVals = this.readSync(scores.dataId); + return nonMaxSuppressionV3(boxesVals, scoresVals, maxOutputSize, iouThreshold, scoreThreshold); + } + fft(x) { + return this.fftBatch(x, false); + } + ifft(x) { + return 
this.fftBatch(x, true); + } + /** + * Calculate FFT of inner most elements of batch tensor. + */ + fftBatch(x, inverse) { + const batch = x.shape[0]; + const innerDim = x.shape[1]; + // Collects real and imaginary values separately. + const realResult = dist["buffer"](x.shape, 'float32'); + const imagResult = dist["buffer"](x.shape, 'float32'); + const real = dist["real"](x).as2D(batch, innerDim); + const imag = dist["imag"](x).as2D(batch, innerDim); + for (let b = 0; b < batch; b++) { + // TODO: Support slice ops for complex type. + const r = real.slice([b, 0], [1, innerDim]); + const i = imag.slice([b, 0], [1, innerDim]); + const input = dist["complex"](r, i); + // Run FFT by batch element. + const res = this.readSync(this.fftImpl(input, inverse).dataId); + for (let d = 0; d < innerDim; d++) { + const c = dist["backend_util"].getComplexWithIndex(res, d); + realResult.values[b * innerDim + d] = c.real; + imagResult.values[b * innerDim + d] = c.imag; + } + } + const t = dist["complex"](realResult.toTensor(), imagResult.toTensor()); + return t.as2D(batch, innerDim); + } + fftImpl(x, inverse) { + const x1D = x.as1D(); + const n = x1D.size; + if (this.isExponentOf2(n)) { + let result = this.fftRadix2(x1D, n, inverse).as2D(x.shape[0], x.shape[1]); + if (inverse) { + result = dist["complex"](dist["real"](result).div(dist["scalar"](n)), dist["imag"](result).div(dist["scalar"](n))); + } + return result; + } + else { + const data = this.readSync(x.dataId); + const rawOutput = this.fourierTransformByMatmul(data, n, inverse); + const output = dist["backend_util"].splitRealAndImagArrays(rawOutput); + return dist["complex"](output.real, output.imag).as2D(x.shape[0], x.shape[1]); + } + } + isExponentOf2(size) { + return (size & size - 1) === 0; + } + // FFT using Cooley-Tukey algorithm on radix 2 dimensional input. 
+ fftRadix2(input, size, inverse) { + if (size === 1) { + return input; + } + const data = this.readSync(input.dataId); + const half = size / 2; + const evenComplex = dist["backend_util"].complexWithEvenIndex(data); + let evenTensor = dist["complex"](evenComplex.real, evenComplex.imag).as1D(); + const oddComplex = dist["backend_util"].complexWithOddIndex(data); + let oddTensor = dist["complex"](oddComplex.real, oddComplex.imag).as1D(); + // Recursive call for half part of original input. + evenTensor = this.fftRadix2(evenTensor, half, inverse); + oddTensor = this.fftRadix2(oddTensor, half, inverse); + const e = dist["backend_util"].exponents(size, inverse); + const exponent = dist["complex"](e.real, e.imag).mul(oddTensor); + const addPart = evenTensor.add(exponent); + const subPart = evenTensor.sub(exponent); + const realTensor = dist["real"](addPart).concat(dist["real"](subPart)); + const imagTensor = dist["imag"](addPart).concat(dist["imag"](subPart)); + return dist["complex"](realTensor, imagTensor).as1D(); + } + // Calculate Fourier transform by multiplying sinusoid matrix. + fourierTransformByMatmul(data, size, inverse) { + const ret = new Float32Array(size * 2); + // TODO: Use matmul instead once it supports complex64 type. + for (let r = 0; r < size; r++) { + let real = 0.0; + let imag = 0.0; + for (let c = 0; c < size; c++) { + const e = dist["backend_util"].exponent(r * c, size, inverse); + const term = dist["backend_util"].getComplexWithIndex(data, c); + real += term.real * e.real - term.imag * e.imag; + imag += term.real * e.imag + term.imag * e.real; + } + if (inverse) { + real /= size; + imag /= size; + } + dist["backend_util"].assignToTypedArray(ret, real, imag, r); + } + return ret; + } + depthToSpace(x, blockSize, dataFormat) { + dist["util"].assert(dataFormat === 'NHWC', () => `Only NHWC dataFormat supported on CPU for depthToSpace. 
Got ${dataFormat}`); + dist["util"].assert(blockSize > 1, () => `blockSize should be > 1 for depthToSpace, but was: ${blockSize}`); + const batchSize = x.shape[0]; + const inputHeight = x.shape[1]; + const inputWidth = x.shape[2]; + const inputDepth = x.shape[3]; + const outputHeight = inputHeight * blockSize; + const outputWidth = inputWidth * blockSize; + const outputDepth = inputDepth / (blockSize * blockSize); + const xValues = this.readSync(x.dataId); + const result = new Float32Array(batchSize * outputHeight * outputWidth * outputDepth); + let outputIdx = 0; + for (let b = 0; b < batchSize; ++b) { + for (let h = 0; h < outputHeight; ++h) { + const inH = Math.floor(h / blockSize); + const offsetH = (h % blockSize); + for (let w = 0; w < outputWidth; ++w) { + const inW = Math.floor(w / blockSize); + const offsetW = (w % blockSize); + const offsetD = (offsetH * blockSize + offsetW) * outputDepth; + for (let d = 0; d < outputDepth; ++d) { + const inD = d + offsetD; + const inputIdx = inD + inputDepth * (inW + inputWidth * (inH + inputHeight * b)); + result[outputIdx++] = xValues[inputIdx]; + } + } + } + } + return dist["tensor4d"](result, [batchSize, outputHeight, outputWidth, outputDepth]); + } + broadcastedBinaryOp(a, b, dtype, op) { + const newShape = dist["backend_util"].assertAndGetBroadcastShape(a.shape, b.shape); + const result = dist["buffer"](newShape, dtype); + const aVals = this.readSync(a.dataId); + const bVals = this.readSync(b.dataId); + const aBroadcastDims = dist["backend_util"].getBroadcastDims(a.shape, newShape); + const bBroadcastDims = dist["backend_util"].getBroadcastDims(b.shape, newShape); + const resVals = result.values; + if (aBroadcastDims.length + bBroadcastDims.length === 0) { + for (let i = 0; i < resVals.length; ++i) { + resVals[i] = op(aVals[i % aVals.length], bVals[i % bVals.length]); + } + } + else { + const aBuf = this.bufferSync(a); + const bBuf = this.bufferSync(b); + for (let i = 0; i < resVals.length; ++i) { + const loc = 
result.indexToLoc(i); + const aLoc = loc.slice(-a.rank); + aBroadcastDims.forEach(d => aLoc[d] = 0); + const aIndex = aBuf.locToIndex(aLoc); + const bLoc = loc.slice(-b.rank); + bBroadcastDims.forEach(d => bLoc[d] = 0); + const bIndex = bBuf.locToIndex(bLoc); + resVals[i] = op(aVals[aIndex], bVals[bIndex]); + } + } + return result.toTensor(); + } + broadcastedBinaryComplexOp(a, b, op) { + const newShape = dist["backend_util"].assertAndGetBroadcastShape(a.shape, b.shape); + const realResult = dist["buffer"](newShape, 'float32'); + const imagResult = dist["buffer"](newShape, 'float32'); + const aVals = this.readSync(a.dataId); + const bVals = this.readSync(b.dataId); + const aBroadcastDims = dist["backend_util"].getBroadcastDims(a.shape, newShape); + const bBroadcastDims = dist["backend_util"].getBroadcastDims(b.shape, newShape); + const realVals = realResult.values; + const imagVals = imagResult.values; + if (aBroadcastDims.length + bBroadcastDims.length === 0) { + for (let i = 0; i < realVals.length; i++) { + const aIdx = i % aVals.length; + const bIdx = i % bVals.length; + const result = op(aVals[aIdx * 2], aVals[aIdx * 2 + 1], bVals[bIdx * 2], bVals[bIdx * 2 + 1]); + realVals[i] = result.real; + imagVals[i] = result.imag; + } + } + else { + const aRealBuf = this.bufferSync(this.data.get(a.dataId).complexTensors.real); + const bRealBuf = this.bufferSync(this.data.get(b.dataId).complexTensors.real); + for (let i = 0; i < realVals.length; i++) { + const loc = realResult.indexToLoc(i); + const aLoc = loc.slice(-a.rank); + aBroadcastDims.forEach(d => aLoc[d] = 0); + const aIndex = aRealBuf.locToIndex(aLoc); + const bLoc = loc.slice(-b.rank); + bBroadcastDims.forEach(d => bLoc[d] = 0); + const bIndex = bRealBuf.locToIndex(bLoc); + const opResult = op(aVals[aIndex * 2], aVals[aIndex * 2 + 1], bVals[bIndex * 2], bVals[bIndex * 2 + 1]); + realVals[i] = opResult.real; + imagVals[i] = opResult.imag; + } + } + return this.complex(realResult.toTensor(), 
imagResult.toTensor()); + } + split(x, sizeSplits, axis) { + return split(x, sizeSplits, axis); + } + dispose() { } + floatPrecision() { + return 32; + } + /** Returns the smallest representable number. */ + epsilon() { + return super.epsilon(); + } + cropAndResize(images, boxes, boxIndex, cropSize, method, extrapolationValue) { + const [batch, imageHeight, imageWidth, numChannels] = images.shape; + const numBoxes = boxes.shape[0]; + const [cropHeight, cropWidth] = cropSize; + const output = dist["buffer"]([numBoxes, cropHeight, cropWidth, numChannels], 'float32'); + const boxVals = this.readSync(boxes.dataId); + const boxIndVals = this.readSync(boxIndex.dataId); + const imageVals = this.readSync(images.dataId); + const inStride = images.strides; // to calculate flat indexes into image + const outStride = output.strides; // to calculate flat indexes into output + // Reference implementation + // tslint:disable-next-line:max-line-length + // https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/crop_and_resize_op.cc + for (let b = 0; b < numBoxes; b++) { + const startInd = b * 4; + const y1 = boxVals[startInd]; + const x1 = boxVals[startInd + 1]; + const y2 = boxVals[startInd + 2]; + const x2 = boxVals[startInd + 3]; + const bInd = boxIndVals[b]; + if (bInd >= batch) { + continue; + } + const heightScale = (cropHeight > 1) ? + (y2 - y1) * (imageHeight - 1) / (cropHeight - 1) : + 0; + const widthScale = (cropWidth > 1) ? (x2 - x1) * (imageWidth - 1) / (cropWidth - 1) : 0; + for (let y = 0; y < cropHeight; y++) { + const yInd = (cropHeight > 1) ? 
+ y1 * (imageHeight - 1) + y * (heightScale) : + 0.5 * (y1 + y2) * (imageHeight - 1); + if (yInd < 0 || yInd > imageHeight - 1) { + for (let x = 0; x < cropWidth; x++) { + for (let c = 0; c < numChannels; c++) { + const ind = c + x * outStride[2] + y * outStride[1] + b * outStride[0]; + output.values[ind] = extrapolationValue; + } + } + continue; + } + if (method === 'bilinear') { + const topInd = Math.floor(yInd); + const bottomInd = Math.ceil(yInd); + const yLerp = yInd - topInd; + for (let x = 0; x < cropWidth; x++) { + const xInd = (cropWidth > 1) ? + x1 * (imageWidth - 1) + x * widthScale : + 0.5 * (x1 + x2) * (imageWidth - 1); + if (xInd < 0 || xInd > imageWidth - 1) { + for (let c = 0; c < numChannels; c++) { + const ind = c + x * outStride[2] + y * outStride[1] + b * outStride[0]; + output.values[ind] = extrapolationValue; + } + continue; + } + const leftInd = Math.floor(xInd); + const rightInd = Math.ceil(xInd); + const xLerp = xInd - leftInd; + for (let c = 0; c < numChannels; c++) { + let ind = c + leftInd * inStride[2] + topInd * inStride[1] + + bInd * inStride[0]; + const topLeft = imageVals[ind]; + ind = c + rightInd * inStride[2] + topInd * inStride[1] + + bInd * inStride[0]; + const topRight = imageVals[ind]; + ind = c + leftInd * inStride[2] + bottomInd * inStride[1] + + bInd * inStride[0]; + const bottomLeft = imageVals[ind]; + ind = c + rightInd * inStride[2] + bottomInd * inStride[1] + + bInd * inStride[0]; + const bottomRight = imageVals[ind]; + const top = topLeft + (topRight - topLeft) * xLerp; + const bottom = bottomLeft + (bottomRight - bottomLeft) * xLerp; + ind = c + x * outStride[2] + y * outStride[1] + b * outStride[0]; + output.values[ind] = top + ((bottom - top) * yLerp); + } + } + } + else { // method == "nearest" + for (let x = 0; x < cropWidth; ++x) { + const xInd = (cropWidth > 1) ? 
+ x1 * (imageWidth - 1) + x * widthScale : + 0.5 * (x1 + x2) * (imageWidth - 1); + if (xInd < 0 || xInd > imageWidth - 1) { + for (let c = 0; c < numChannels; c++) { + const ind = c + x * outStride[2] + y * outStride[1] + b * outStride[0]; + output.values[ind] = extrapolationValue; + } + continue; + } + const closestX = Math.round(xInd); + const closestY = Math.round(yInd); + for (let c = 0; c < numChannels; c++) { + const inInd = c + closestX * inStride[2] + + closestY * inStride[1] + bInd * inStride[0]; + const outInd = c + x * outStride[2] + y * outStride[1] + b * outStride[0]; + output.values[outInd] = imageVals[inInd]; + } + } + } + } + } + return output.toTensor(); + } + sparseToDense(sparseIndices, sparseValues, outputShape, defaultValue) { + const { sliceRank, numUpdates, sliceSize, strides, outputSize } = dist["backend_util"].calculateShapes(sparseValues, sparseIndices, outputShape); + const sumDupeIndices = false; + return this.scatter(sparseIndices, sparseValues, outputShape, outputSize, sliceSize, numUpdates, sliceRank, strides, defaultValue, sumDupeIndices); + } + gatherND(x, indices) { + const indicesShape = indices.shape; + const sliceRank = indicesShape[indicesShape.length - 1]; + const [resultShape, numSlices, sliceSize, strides] = dist["backend_util"].prepareAndValidate(x, indices); + if (numSlices === 0) { + return dist["tensor"]([], resultShape, x.dtype); + } + const buffer = new dist["TensorBuffer"]([numSlices, sliceSize], x.dtype); + const indicesData = this.readSync(indices.dataId); + const xData = this.readSync(x.dataId); + for (let i = 0; i < numSlices; i++) { + const index = []; + let flattenIndex = 0; + for (let j = 0; j < sliceRank; j++) { + const dim = indicesData[i * sliceRank + j]; + flattenIndex += dim * strides[j]; + index.push(dim); + } + if (flattenIndex < 0 || flattenIndex >= x.size / sliceSize) { + throw new Error(`Invalid indices: ${index} does not index into ${x.shape}`); + } + for (let k = 0; k < sliceSize; k++) { + 
buffer.values[i * sliceSize + k] = xData[flattenIndex * sliceSize + k]; + } + } + return buffer.toTensor().reshape(resultShape); + } + scatterND(indices, updates, shape) { + const { sliceRank, numUpdates, sliceSize, strides, outputSize } = dist["backend_util"].calculateShapes(updates, indices, shape); + const defaultValue = dist["scalar"](0); + const sumDupeIndices = true; + return this.scatter(indices, updates, shape, outputSize, sliceSize, numUpdates, sliceRank, strides, defaultValue, sumDupeIndices); + } + fill(shape, value, dtype) { + dtype = dtype || dist["util"].inferDtype(value); + const values = dist["util"].getArrayFromDType(dtype, dist["util"].sizeFromShape(shape)); + values.fill(value); + return Object(dist["engine"])().makeTensor(values, shape, dtype, this); + } + onesLike(x) { + if (x.dtype === 'string') { + throw new Error('onesLike is not supported for string tensors'); + } + else { + return this.fill(x.shape, 1, x.dtype); + } + } + zerosLike(x) { + const values = dist["util"].getArrayFromDType(x.dtype, dist["util"].sizeFromShape(x.shape)); + return this.makeOutput(values, x.shape, x.dtype); + } + linspace(start, stop, num) { + return dist["backend_util"].linspaceImpl(start, stop, num); + } + scatter(indices, updates, shape, outputSize, sliceSize, numUpdates, sliceRank, strides, defaultValue, sumDupeIndices) { + const flattenShape = [outputSize / sliceSize, sliceSize]; + const indicesData = this.readSync(indices.dataId); + const updatesData = this.readSync(updates.dataId); + if (outputSize === 0) { + return dist["tensor"]([], shape, updates.dtype); + } + const buffer = new dist["TensorBuffer"](flattenShape, updates.dtype); + buffer.values.fill(this.readSync(defaultValue.dataId)[0]); + for (let i = 0; i < numUpdates; i++) { + const index = []; + let flattenIndex = 0; + for (let j = 0; j < sliceRank; j++) { + const dim = indicesData[i * sliceRank + j]; + index.push(dim); + flattenIndex += dim * strides[j]; + } + if (flattenIndex < 0 || flattenIndex >= 
outputSize / sliceSize) { + throw new Error(`Invalid indices: ${index} does not index into ${shape}`); + } + for (let k = 0; k < sliceSize; k++) { + if (sumDupeIndices) { + buffer.values[flattenIndex * sliceSize + k] += + updatesData[i * sliceSize + k]; + } + else { + buffer.values[flattenIndex * sliceSize + k] = updates.rank === 0 ? + updatesData[0] : + updatesData[i * sliceSize + k]; + } + } + } + return buffer.toTensor().reshape(shape); + } +} +//# sourceMappingURL=backend_cpu.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/version.js +/** @license See the LICENSE file. */ +// This code is auto-generated, do not modify this file! +const version = '2.0.1'; + +//# sourceMappingURL=version.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-backend-cpu/dist/base.js +/** + * @license + * Copyright 2020 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +/* + * base.ts contains all the exports from tfjs-backend-cpu + * that do not trigger side effects. + */ + + + + +//# sourceMappingURL=base.js.map + +/***/ }), +/* 32 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(global) {var require;var require;/*! 
+ localForage -- Offline Storage, Improved + Version 1.7.3 + https://localforage.github.io/localForage + (c) 2013-2017 Mozilla, Apache License 2.0 +*/ +(function(f){if(true){module.exports=f()}else { var g; }})(function(){var define,module,exports;return (function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return require(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw (f.code="MODULE_NOT_FOUND", f)}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(_dereq_,module,exports){ +(function (global){ +'use strict'; +var Mutation = global.MutationObserver || global.WebKitMutationObserver; + +var scheduleDrain; + +{ + if (Mutation) { + var called = 0; + var observer = new Mutation(nextTick); + var element = global.document.createTextNode(''); + observer.observe(element, { + characterData: true + }); + scheduleDrain = function () { + element.data = (called = ++called % 2); + }; + } else if (!global.setImmediate && typeof global.MessageChannel !== 'undefined') { + var channel = new global.MessageChannel(); + channel.port1.onmessage = nextTick; + scheduleDrain = function () { + channel.port2.postMessage(0); + }; + } else if ('document' in global && 'onreadystatechange' in global.document.createElement('script')) { + scheduleDrain = function () { + + // Create a <script> element; its readystatechange event will be fired asynchronously once it is inserted + // into the document. Do so, thus queuing up the task. Remember to clean up once it's been called. + var scriptEl = global.document.createElement('script'); + scriptEl.onreadystatechange = function () { + nextTick(); + + scriptEl.onreadystatechange = null; + scriptEl.parentNode.removeChild(scriptEl); + scriptEl = null; + }; + global.document.documentElement.appendChild(scriptEl); + }; + } else { + scheduleDrain = function () { + setTimeout(nextTick, 0); + }; + } +} + +var draining; +var queue = []; +//named nextTick for less confusing stack traces +function nextTick() { + draining = true; + var i, oldQueue; + var len = queue.length; + while (len) { + oldQueue = queue; + queue = []; + i = -1; + while (++i < len) { + oldQueue[i](); + } + len = queue.length; + } + draining = false; +} + +module.exports = immediate; +function immediate(task) { + if (queue.push(task) === 1 && !draining) { + scheduleDrain(); + } +} + +}).call(this,typeof global !== "undefined" ? global : typeof self !== "undefined" ? self : typeof window !== "undefined" ?
window : {}) +},{}],2:[function(_dereq_,module,exports){ +'use strict'; +var immediate = _dereq_(1); + +/* istanbul ignore next */ +function INTERNAL() {} + +var handlers = {}; + +var REJECTED = ['REJECTED']; +var FULFILLED = ['FULFILLED']; +var PENDING = ['PENDING']; + +module.exports = Promise; + +function Promise(resolver) { + if (typeof resolver !== 'function') { + throw new TypeError('resolver must be a function'); + } + this.state = PENDING; + this.queue = []; + this.outcome = void 0; + if (resolver !== INTERNAL) { + safelyResolveThenable(this, resolver); + } +} + +Promise.prototype["catch"] = function (onRejected) { + return this.then(null, onRejected); +}; +Promise.prototype.then = function (onFulfilled, onRejected) { + if (typeof onFulfilled !== 'function' && this.state === FULFILLED || + typeof onRejected !== 'function' && this.state === REJECTED) { + return this; + } + var promise = new this.constructor(INTERNAL); + if (this.state !== PENDING) { + var resolver = this.state === FULFILLED ? 
onFulfilled : onRejected; + unwrap(promise, resolver, this.outcome); + } else { + this.queue.push(new QueueItem(promise, onFulfilled, onRejected)); + } + + return promise; +}; +function QueueItem(promise, onFulfilled, onRejected) { + this.promise = promise; + if (typeof onFulfilled === 'function') { + this.onFulfilled = onFulfilled; + this.callFulfilled = this.otherCallFulfilled; + } + if (typeof onRejected === 'function') { + this.onRejected = onRejected; + this.callRejected = this.otherCallRejected; + } +} +QueueItem.prototype.callFulfilled = function (value) { + handlers.resolve(this.promise, value); +}; +QueueItem.prototype.otherCallFulfilled = function (value) { + unwrap(this.promise, this.onFulfilled, value); +}; +QueueItem.prototype.callRejected = function (value) { + handlers.reject(this.promise, value); +}; +QueueItem.prototype.otherCallRejected = function (value) { + unwrap(this.promise, this.onRejected, value); +}; + +function unwrap(promise, func, value) { + immediate(function () { + var returnValue; + try { + returnValue = func(value); + } catch (e) { + return handlers.reject(promise, e); + } + if (returnValue === promise) { + handlers.reject(promise, new TypeError('Cannot resolve promise with itself')); + } else { + handlers.resolve(promise, returnValue); + } + }); +} + +handlers.resolve = function (self, value) { + var result = tryCatch(getThen, value); + if (result.status === 'error') { + return handlers.reject(self, result.value); + } + var thenable = result.value; + + if (thenable) { + safelyResolveThenable(self, thenable); + } else { + self.state = FULFILLED; + self.outcome = value; + var i = -1; + var len = self.queue.length; + while (++i < len) { + self.queue[i].callFulfilled(value); + } + } + return self; +}; +handlers.reject = function (self, error) { + self.state = REJECTED; + self.outcome = error; + var i = -1; + var len = self.queue.length; + while (++i < len) { + self.queue[i].callRejected(error); + } + return self; +}; + +function 
getThen(obj) { + // Make sure we only access the accessor once as required by the spec + var then = obj && obj.then; + if (obj && (typeof obj === 'object' || typeof obj === 'function') && typeof then === 'function') { + return function appyThen() { + then.apply(obj, arguments); + }; + } +} + +function safelyResolveThenable(self, thenable) { + // Either fulfill, reject or reject with error + var called = false; + function onError(value) { + if (called) { + return; + } + called = true; + handlers.reject(self, value); + } + + function onSuccess(value) { + if (called) { + return; + } + called = true; + handlers.resolve(self, value); + } + + function tryToUnwrap() { + thenable(onSuccess, onError); + } + + var result = tryCatch(tryToUnwrap); + if (result.status === 'error') { + onError(result.value); + } +} + +function tryCatch(func, value) { + var out = {}; + try { + out.value = func(value); + out.status = 'success'; + } catch (e) { + out.status = 'error'; + out.value = e; + } + return out; +} + +Promise.resolve = resolve; +function resolve(value) { + if (value instanceof this) { + return value; + } + return handlers.resolve(new this(INTERNAL), value); +} + +Promise.reject = reject; +function reject(reason) { + var promise = new this(INTERNAL); + return handlers.reject(promise, reason); +} + +Promise.all = all; +function all(iterable) { + var self = this; + if (Object.prototype.toString.call(iterable) !== '[object Array]') { + return this.reject(new TypeError('must be an array')); + } + + var len = iterable.length; + var called = false; + if (!len) { + return this.resolve([]); + } + + var values = new Array(len); + var resolved = 0; + var i = -1; + var promise = new this(INTERNAL); + + while (++i < len) { + allResolver(iterable[i], i); + } + return promise; + function allResolver(value, i) { + self.resolve(value).then(resolveFromAll, function (error) { + if (!called) { + called = true; + handlers.reject(promise, error); + } + }); + function resolveFromAll(outValue) { + 
values[i] = outValue; + if (++resolved === len && !called) { + called = true; + handlers.resolve(promise, values); + } + } + } +} + +Promise.race = race; +function race(iterable) { + var self = this; + if (Object.prototype.toString.call(iterable) !== '[object Array]') { + return this.reject(new TypeError('must be an array')); + } + + var len = iterable.length; + var called = false; + if (!len) { + return this.resolve([]); + } + + var i = -1; + var promise = new this(INTERNAL); + + while (++i < len) { + resolver(iterable[i]); + } + return promise; + function resolver(value) { + self.resolve(value).then(function (response) { + if (!called) { + called = true; + handlers.resolve(promise, response); + } + }, function (error) { + if (!called) { + called = true; + handlers.reject(promise, error); + } + }); + } +} + +},{"1":1}],3:[function(_dereq_,module,exports){ +(function (global){ +'use strict'; +if (typeof global.Promise !== 'function') { + global.Promise = _dereq_(2); +} + +}).call(this,typeof global !== "undefined" ? global : typeof self !== "undefined" ? self : typeof window !== "undefined" ? window : {}) +},{"2":2}],4:[function(_dereq_,module,exports){ +'use strict'; + +var _typeof = typeof Symbol === "function" && typeof Symbol.iterator === "symbol" ? function (obj) { return typeof obj; } : function (obj) { return obj && typeof Symbol === "function" && obj.constructor === Symbol && obj !== Symbol.prototype ? 
"symbol" : typeof obj; }; + +function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } } + +function getIDB() { + /* global indexedDB,webkitIndexedDB,mozIndexedDB,OIndexedDB,msIndexedDB */ + try { + if (typeof indexedDB !== 'undefined') { + return indexedDB; + } + if (typeof webkitIndexedDB !== 'undefined') { + return webkitIndexedDB; + } + if (typeof mozIndexedDB !== 'undefined') { + return mozIndexedDB; + } + if (typeof OIndexedDB !== 'undefined') { + return OIndexedDB; + } + if (typeof msIndexedDB !== 'undefined') { + return msIndexedDB; + } + } catch (e) { + return; + } +} + +var idb = getIDB(); + +function isIndexedDBValid() { + try { + // Initialize IndexedDB; fall back to vendor-prefixed versions + // if needed. + if (!idb) { + return false; + } + // We mimic PouchDB here; + // + // We test for openDatabase because IE Mobile identifies itself + // as Safari. Oh the lulz... + var isSafari = typeof openDatabase !== 'undefined' && /(Safari|iPhone|iPad|iPod)/.test(navigator.userAgent) && !/Chrome/.test(navigator.userAgent) && !/BlackBerry/.test(navigator.platform); + + var hasFetch = typeof fetch === 'function' && fetch.toString().indexOf('[native code') !== -1; + + // Safari <10.1 does not meet our requirements for IDB support (#5572) + // since Safari 10.1 shipped with fetch, we can use that to detect it + return (!isSafari || hasFetch) && typeof indexedDB !== 'undefined' && + // some outdated implementations of IDB that appear on Samsung + // and HTC Android devices <4.4 are missing IDBKeyRange + // See: https://github.com/mozilla/localForage/issues/128 + // See: https://github.com/mozilla/localForage/issues/272 + typeof IDBKeyRange !== 'undefined'; + } catch (e) { + return false; + } +} + +// Abstracts constructing a Blob object, so it also works in older +// browsers that don't support the native Blob constructor. (i.e. +// old QtWebKit versions, at least). 
+function createBlob(parts, properties) { + /* global BlobBuilder,MSBlobBuilder,MozBlobBuilder,WebKitBlobBuilder */ + parts = parts || []; + properties = properties || {}; + try { + return new Blob(parts, properties); + } catch (e) { + if (e.name !== 'TypeError') { + throw e; + } + var Builder = typeof BlobBuilder !== 'undefined' ? BlobBuilder : typeof MSBlobBuilder !== 'undefined' ? MSBlobBuilder : typeof MozBlobBuilder !== 'undefined' ? MozBlobBuilder : WebKitBlobBuilder; + var builder = new Builder(); + for (var i = 0; i < parts.length; i += 1) { + builder.append(parts[i]); + } + return builder.getBlob(properties.type); + } +} + +// This is CommonJS because lie is an external dependency, so Rollup +// can just ignore it. +if (typeof Promise === 'undefined') { + // In the "nopromises" build this will just throw if you don't have + // a global promise object, but it would throw anyway later. + _dereq_(3); +} +var Promise$1 = Promise; + +function executeCallback(promise, callback) { + if (callback) { + promise.then(function (result) { + callback(null, result); + }, function (error) { + callback(error); + }); + } +} + +function executeTwoCallbacks(promise, callback, errorCallback) { + if (typeof callback === 'function') { + promise.then(callback); + } + + if (typeof errorCallback === 'function') { + promise["catch"](errorCallback); + } +} + +function normalizeKey(key) { + // Cast the key to a string, as that's all we can set as a key.
+ if (typeof key !== 'string') { + console.warn(key + ' used as a key, but it is not a string.'); + key = String(key); + } + + return key; +} + +function getCallback() { + if (arguments.length && typeof arguments[arguments.length - 1] === 'function') { + return arguments[arguments.length - 1]; + } +} + +// Some code originally from async_storage.js in +// [Gaia](https://github.com/mozilla-b2g/gaia). + +var DETECT_BLOB_SUPPORT_STORE = 'local-forage-detect-blob-support'; +var supportsBlobs = void 0; +var dbContexts = {}; +var toString = Object.prototype.toString; + +// Transaction Modes +var READ_ONLY = 'readonly'; +var READ_WRITE = 'readwrite'; + +// Transform a binary string to an array buffer, because otherwise +// weird stuff happens when you try to work with the binary string directly. +// It is known. +// From http://stackoverflow.com/questions/14967647/ (continues on next line) +// encode-decode-image-with-base64-breaks-image (2013-04-21) +function _binStringToArrayBuffer(bin) { + var length = bin.length; + var buf = new ArrayBuffer(length); + var arr = new Uint8Array(buf); + for (var i = 0; i < length; i++) { + arr[i] = bin.charCodeAt(i); + } + return buf; +} + +// +// Blobs are not supported in all versions of IndexedDB, notably +// Chrome <37 and Android <5. In those versions, storing a blob will throw. +// +// Various other blob bugs exist in Chrome v37-42 (inclusive). +// Detecting them is expensive and confusing to users, and Chrome 37-42 +// is at very low usage worldwide, so we do a hacky userAgent check instead. +// +// content-type bug: https://code.google.com/p/chromium/issues/detail?id=408120 +// 404 bug: https://code.google.com/p/chromium/issues/detail?id=447916 +// FileReader bug: https://code.google.com/p/chromium/issues/detail?id=447836 +// +// Code borrowed from PouchDB. 
See: +// https://github.com/pouchdb/pouchdb/blob/master/packages/node_modules/pouchdb-adapter-idb/src/blobSupport.js +// +function _checkBlobSupportWithoutCaching(idb) { + return new Promise$1(function (resolve) { + var txn = idb.transaction(DETECT_BLOB_SUPPORT_STORE, READ_WRITE); + var blob = createBlob(['']); + txn.objectStore(DETECT_BLOB_SUPPORT_STORE).put(blob, 'key'); + + txn.onabort = function (e) { + // If the transaction aborts now its due to not being able to + // write to the database, likely due to the disk being full + e.preventDefault(); + e.stopPropagation(); + resolve(false); + }; + + txn.oncomplete = function () { + var matchedChrome = navigator.userAgent.match(/Chrome\/(\d+)/); + var matchedEdge = navigator.userAgent.match(/Edge\//); + // MS Edge pretends to be Chrome 42: + // https://msdn.microsoft.com/en-us/library/hh869301%28v=vs.85%29.aspx + resolve(matchedEdge || !matchedChrome || parseInt(matchedChrome[1], 10) >= 43); + }; + })["catch"](function () { + return false; // error, so assume unsupported + }); +} + +function _checkBlobSupport(idb) { + if (typeof supportsBlobs === 'boolean') { + return Promise$1.resolve(supportsBlobs); + } + return _checkBlobSupportWithoutCaching(idb).then(function (value) { + supportsBlobs = value; + return supportsBlobs; + }); +} + +function _deferReadiness(dbInfo) { + var dbContext = dbContexts[dbInfo.name]; + + // Create a deferred object representing the current database operation. + var deferredOperation = {}; + + deferredOperation.promise = new Promise$1(function (resolve, reject) { + deferredOperation.resolve = resolve; + deferredOperation.reject = reject; + }); + + // Enqueue the deferred operation. + dbContext.deferredOperations.push(deferredOperation); + + // Chain its promise to the database readiness. 
+ if (!dbContext.dbReady) { + dbContext.dbReady = deferredOperation.promise; + } else { + dbContext.dbReady = dbContext.dbReady.then(function () { + return deferredOperation.promise; + }); + } +} + +function _advanceReadiness(dbInfo) { + var dbContext = dbContexts[dbInfo.name]; + + // Dequeue a deferred operation. + var deferredOperation = dbContext.deferredOperations.pop(); + + // Resolve its promise (which is part of the database readiness + // chain of promises). + if (deferredOperation) { + deferredOperation.resolve(); + return deferredOperation.promise; + } +} + +function _rejectReadiness(dbInfo, err) { + var dbContext = dbContexts[dbInfo.name]; + + // Dequeue a deferred operation. + var deferredOperation = dbContext.deferredOperations.pop(); + + // Reject its promise (which is part of the database readiness + // chain of promises). + if (deferredOperation) { + deferredOperation.reject(err); + return deferredOperation.promise; + } +} + +function _getConnection(dbInfo, upgradeNeeded) { + return new Promise$1(function (resolve, reject) { + dbContexts[dbInfo.name] = dbContexts[dbInfo.name] || createDbContext(); + + if (dbInfo.db) { + if (upgradeNeeded) { + _deferReadiness(dbInfo); + dbInfo.db.close(); + } else { + return resolve(dbInfo.db); + } + } + + var dbArgs = [dbInfo.name]; + + if (upgradeNeeded) { + dbArgs.push(dbInfo.version); + } + + var openreq = idb.open.apply(idb, dbArgs); + + if (upgradeNeeded) { + openreq.onupgradeneeded = function (e) { + var db = openreq.result; + try { + db.createObjectStore(dbInfo.storeName); + if (e.oldVersion <= 1) { + // Added when support for blob shims was added + db.createObjectStore(DETECT_BLOB_SUPPORT_STORE); + } + } catch (ex) { + if (ex.name === 'ConstraintError') { + console.warn('The database "' + dbInfo.name + '"' + ' has been upgraded from version ' + e.oldVersion + ' to version ' + e.newVersion + ', but the storage "' + dbInfo.storeName + '" already exists.'); + } else { + throw ex; + } + } + }; + } + + 
openreq.onerror = function (e) { + e.preventDefault(); + reject(openreq.error); + }; + + openreq.onsuccess = function () { + resolve(openreq.result); + _advanceReadiness(dbInfo); + }; + }); +} + +function _getOriginalConnection(dbInfo) { + return _getConnection(dbInfo, false); +} + +function _getUpgradedConnection(dbInfo) { + return _getConnection(dbInfo, true); +} + +function _isUpgradeNeeded(dbInfo, defaultVersion) { + if (!dbInfo.db) { + return true; + } + + var isNewStore = !dbInfo.db.objectStoreNames.contains(dbInfo.storeName); + var isDowngrade = dbInfo.version < dbInfo.db.version; + var isUpgrade = dbInfo.version > dbInfo.db.version; + + if (isDowngrade) { + // If the version is not the default one + // then warn for impossible downgrade. + if (dbInfo.version !== defaultVersion) { + console.warn('The database "' + dbInfo.name + '"' + " can't be downgraded from version " + dbInfo.db.version + ' to version ' + dbInfo.version + '.'); + } + // Align the versions to prevent errors. + dbInfo.version = dbInfo.db.version; + } + + if (isUpgrade || isNewStore) { + // If the store is new then increment the version (if needed). + // This will trigger an "upgradeneeded" event which is required + // for creating a store. 
+ if (isNewStore) { + var incVersion = dbInfo.db.version + 1; + if (incVersion > dbInfo.version) { + dbInfo.version = incVersion; + } + } + + return true; + } + + return false; +} + +// encode a blob for indexeddb engines that don't support blobs +function _encodeBlob(blob) { + return new Promise$1(function (resolve, reject) { + var reader = new FileReader(); + reader.onerror = reject; + reader.onloadend = function (e) { + var base64 = btoa(e.target.result || ''); + resolve({ + __local_forage_encoded_blob: true, + data: base64, + type: blob.type + }); + }; + reader.readAsBinaryString(blob); + }); +} + +// decode an encoded blob +function _decodeBlob(encodedBlob) { + var arrayBuff = _binStringToArrayBuffer(atob(encodedBlob.data)); + return createBlob([arrayBuff], { type: encodedBlob.type }); +} + +// is this one of our fancy encoded blobs? +function _isEncodedBlob(value) { + return value && value.__local_forage_encoded_blob; +} + +// Specialize the default `ready()` function by making it dependent +// on the current database operations. Thus, the driver will be actually +// ready when it's been initialized (default) *and* there are no pending +// operations on the database (initiated by some other instances). +function _fullyReady(callback) { + var self = this; + + var promise = self._initReady().then(function () { + var dbContext = dbContexts[self._dbInfo.name]; + + if (dbContext && dbContext.dbReady) { + return dbContext.dbReady; + } + }); + + executeTwoCallbacks(promise, callback, callback); + return promise; +} + +// Try to establish a new db connection to replace the +// current one which is broken (i.e. experiencing +// InvalidStateError while creating a transaction). 
+function _tryReconnect(dbInfo) { + _deferReadiness(dbInfo); + + var dbContext = dbContexts[dbInfo.name]; + var forages = dbContext.forages; + + for (var i = 0; i < forages.length; i++) { + var forage = forages[i]; + if (forage._dbInfo.db) { + forage._dbInfo.db.close(); + forage._dbInfo.db = null; + } + } + dbInfo.db = null; + + return _getOriginalConnection(dbInfo).then(function (db) { + dbInfo.db = db; + if (_isUpgradeNeeded(dbInfo)) { + // Reopen the database for upgrading. + return _getUpgradedConnection(dbInfo); + } + return db; + }).then(function (db) { + // store the latest db reference + // in case the db was upgraded + dbInfo.db = dbContext.db = db; + for (var i = 0; i < forages.length; i++) { + forages[i]._dbInfo.db = db; + } + })["catch"](function (err) { + _rejectReadiness(dbInfo, err); + throw err; + }); +} + +// FF doesn't like Promises (micro-tasks) and IDDB store operations, +// so we have to do it with callbacks +function createTransaction(dbInfo, mode, callback, retries) { + if (retries === undefined) { + retries = 1; + } + + try { + var tx = dbInfo.db.transaction(dbInfo.storeName, mode); + callback(null, tx); + } catch (err) { + if (retries > 0 && (!dbInfo.db || err.name === 'InvalidStateError' || err.name === 'NotFoundError')) { + return Promise$1.resolve().then(function () { + if (!dbInfo.db || err.name === 'NotFoundError' && !dbInfo.db.objectStoreNames.contains(dbInfo.storeName) && dbInfo.version <= dbInfo.db.version) { + // increase the db version, to create the new ObjectStore + if (dbInfo.db) { + dbInfo.version = dbInfo.db.version + 1; + } + // Reopen the database for upgrading. + return _getUpgradedConnection(dbInfo); + } + }).then(function () { + return _tryReconnect(dbInfo).then(function () { + createTransaction(dbInfo, mode, callback, retries - 1); + }); + })["catch"](callback); + } + + callback(err); + } +} + +function createDbContext() { + return { + // Running localForages sharing a database. + forages: [], + // Shared database. 
+ db: null, + // Database readiness (promise). + dbReady: null, + // Deferred operations on the database. + deferredOperations: [] + }; +} + +// Open the IndexedDB database (automatically creates one if one didn't +// previously exist), using any options set in the config. +function _initStorage(options) { + var self = this; + var dbInfo = { + db: null + }; + + if (options) { + for (var i in options) { + dbInfo[i] = options[i]; + } + } + + // Get the current context of the database; + var dbContext = dbContexts[dbInfo.name]; + + // ...or create a new context. + if (!dbContext) { + dbContext = createDbContext(); + // Register the new context in the global container. + dbContexts[dbInfo.name] = dbContext; + } + + // Register itself as a running localForage in the current context. + dbContext.forages.push(self); + + // Replace the default `ready()` function with the specialized one. + if (!self._initReady) { + self._initReady = self.ready; + self.ready = _fullyReady; + } + + // Create an array of initialization states of the related localForages. + var initPromises = []; + + function ignoreErrors() { + // Don't handle errors here, + // just makes sure related localForages aren't pending. + return Promise$1.resolve(); + } + + for (var j = 0; j < dbContext.forages.length; j++) { + var forage = dbContext.forages[j]; + if (forage !== self) { + // Don't wait for itself... + initPromises.push(forage._initReady()["catch"](ignoreErrors)); + } + } + + // Take a snapshot of the related localForages. + var forages = dbContext.forages.slice(0); + + // Initialize the connection process only when + // all the related localForages aren't pending. + return Promise$1.all(initPromises).then(function () { + dbInfo.db = dbContext.db; + // Get the connection or open a new one without upgrade. + return _getOriginalConnection(dbInfo); + }).then(function (db) { + dbInfo.db = db; + if (_isUpgradeNeeded(dbInfo, self._defaultConfig.version)) { + // Reopen the database for upgrading. 
+ return _getUpgradedConnection(dbInfo); + } + return db; + }).then(function (db) { + dbInfo.db = dbContext.db = db; + self._dbInfo = dbInfo; + // Share the final connection amongst related localForages. + for (var k = 0; k < forages.length; k++) { + var forage = forages[k]; + if (forage !== self) { + // Self is already up-to-date. + forage._dbInfo.db = dbInfo.db; + forage._dbInfo.version = dbInfo.version; + } + } + }); +} + +function getItem(key, callback) { + var self = this; + + key = normalizeKey(key); + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + createTransaction(self._dbInfo, READ_ONLY, function (err, transaction) { + if (err) { + return reject(err); + } + + try { + var store = transaction.objectStore(self._dbInfo.storeName); + var req = store.get(key); + + req.onsuccess = function () { + var value = req.result; + if (value === undefined) { + value = null; + } + if (_isEncodedBlob(value)) { + value = _decodeBlob(value); + } + resolve(value); + }; + + req.onerror = function () { + reject(req.error); + }; + } catch (e) { + reject(e); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +// Iterate over all items stored in database. 
+function iterate(iterator, callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + createTransaction(self._dbInfo, READ_ONLY, function (err, transaction) { + if (err) { + return reject(err); + } + + try { + var store = transaction.objectStore(self._dbInfo.storeName); + var req = store.openCursor(); + var iterationNumber = 1; + + req.onsuccess = function () { + var cursor = req.result; + + if (cursor) { + var value = cursor.value; + if (_isEncodedBlob(value)) { + value = _decodeBlob(value); + } + var result = iterator(value, cursor.key, iterationNumber++); + + // when the iterator callback returns any + // (non-`undefined`) value, then we stop + // the iteration immediately + if (result !== void 0) { + resolve(result); + } else { + cursor["continue"](); + } + } else { + resolve(); + } + }; + + req.onerror = function () { + reject(req.error); + }; + } catch (e) { + reject(e); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + + return promise; +} + +function setItem(key, value, callback) { + var self = this; + + key = normalizeKey(key); + + var promise = new Promise$1(function (resolve, reject) { + var dbInfo; + self.ready().then(function () { + dbInfo = self._dbInfo; + if (toString.call(value) === '[object Blob]') { + return _checkBlobSupport(dbInfo.db).then(function (blobSupport) { + if (blobSupport) { + return value; + } + return _encodeBlob(value); + }); + } + return value; + }).then(function (value) { + createTransaction(self._dbInfo, READ_WRITE, function (err, transaction) { + if (err) { + return reject(err); + } + + try { + var store = transaction.objectStore(self._dbInfo.storeName); + + // The reason we don't _save_ null is because IE 10 does + // not support saving the `null` type in IndexedDB. How + // ironic, given the bug below!
+ // See: https://github.com/mozilla/localForage/issues/161 + if (value === null) { + value = undefined; + } + + var req = store.put(value, key); + + transaction.oncomplete = function () { + // Cast to undefined so the value passed to + // callback/promise is the same as what one would get out + // of `getItem()` later. This leads to some weirdness + // (setItem('foo', undefined) will return `null`), but + // it's not my fault localStorage is our baseline and that + // it's weird. + if (value === undefined) { + value = null; + } + + resolve(value); + }; + transaction.onabort = transaction.onerror = function () { + var err = req.error ? req.error : req.transaction.error; + reject(err); + }; + } catch (e) { + reject(e); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function removeItem(key, callback) { + var self = this; + + key = normalizeKey(key); + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + createTransaction(self._dbInfo, READ_WRITE, function (err, transaction) { + if (err) { + return reject(err); + } + + try { + var store = transaction.objectStore(self._dbInfo.storeName); + // We use a Grunt task to make this safe for IE and some + // versions of Android (including those used by Cordova). + // Normally IE won't like `.delete()` and will insist on + // using `['delete']()`, but we have a build step that + // fixes this for us now. + var req = store["delete"](key); + transaction.oncomplete = function () { + resolve(); + }; + + transaction.onerror = function () { + reject(req.error); + }; + + // The request will also be aborted if we've exceeded our storage + // space. + transaction.onabort = function () { + var err = req.error ?
req.error : req.transaction.error; + reject(err); + }; + } catch (e) { + reject(e); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function clear(callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + createTransaction(self._dbInfo, READ_WRITE, function (err, transaction) { + if (err) { + return reject(err); + } + + try { + var store = transaction.objectStore(self._dbInfo.storeName); + var req = store.clear(); + + transaction.oncomplete = function () { + resolve(); + }; + + transaction.onabort = transaction.onerror = function () { + var err = req.error ? req.error : req.transaction.error; + reject(err); + }; + } catch (e) { + reject(e); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function length(callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + createTransaction(self._dbInfo, READ_ONLY, function (err, transaction) { + if (err) { + return reject(err); + } + + try { + var store = transaction.objectStore(self._dbInfo.storeName); + var req = store.count(); + + req.onsuccess = function () { + resolve(req.result); + }; + + req.onerror = function () { + reject(req.error); + }; + } catch (e) { + reject(e); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function key(n, callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + if (n < 0) { + resolve(null); + + return; + } + + self.ready().then(function () { + createTransaction(self._dbInfo, READ_ONLY, function (err, transaction) { + if (err) { + return reject(err); + } + + try { + var store = transaction.objectStore(self._dbInfo.storeName); + var advanced = false; + var req = store.openCursor(); + + req.onsuccess = function () { + var cursor = req.result; + if (!cursor) { + 
// this means there weren't enough keys + resolve(null); + + return; + } + + if (n === 0) { + // We have the first key, return it if that's what they + // wanted. + resolve(cursor.key); + } else { + if (!advanced) { + // Otherwise, ask the cursor to skip ahead n + // records. + advanced = true; + cursor.advance(n); + } else { + // When we get here, we've got the nth key. + resolve(cursor.key); + } + } + }; + + req.onerror = function () { + reject(req.error); + }; + } catch (e) { + reject(e); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function keys(callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + createTransaction(self._dbInfo, READ_ONLY, function (err, transaction) { + if (err) { + return reject(err); + } + + try { + var store = transaction.objectStore(self._dbInfo.storeName); + var req = store.openCursor(); + var keys = []; + + req.onsuccess = function () { + var cursor = req.result; + + if (!cursor) { + resolve(keys); + return; + } + + keys.push(cursor.key); + cursor["continue"](); + }; + + req.onerror = function () { + reject(req.error); + }; + } catch (e) { + reject(e); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function dropInstance(options, callback) { + callback = getCallback.apply(this, arguments); + + var currentConfig = this.config(); + options = typeof options !== 'function' && options || {}; + if (!options.name) { + options.name = options.name || currentConfig.name; + options.storeName = options.storeName || currentConfig.storeName; + } + + var self = this; + var promise; + if (!options.name) { + promise = Promise$1.reject('Invalid arguments'); + } else { + var isCurrentDb = options.name === currentConfig.name && self._dbInfo.db; + + var dbPromise = isCurrentDb ? 
Promise$1.resolve(self._dbInfo.db) : _getOriginalConnection(options).then(function (db) { + var dbContext = dbContexts[options.name]; + var forages = dbContext.forages; + dbContext.db = db; + for (var i = 0; i < forages.length; i++) { + forages[i]._dbInfo.db = db; + } + return db; + }); + + if (!options.storeName) { + promise = dbPromise.then(function (db) { + _deferReadiness(options); + + var dbContext = dbContexts[options.name]; + var forages = dbContext.forages; + + db.close(); + for (var i = 0; i < forages.length; i++) { + var forage = forages[i]; + forage._dbInfo.db = null; + } + + var dropDBPromise = new Promise$1(function (resolve, reject) { + var req = idb.deleteDatabase(options.name); + + req.onerror = req.onblocked = function (err) { + var db = req.result; + if (db) { + db.close(); + } + reject(err); + }; + + req.onsuccess = function () { + var db = req.result; + if (db) { + db.close(); + } + resolve(db); + }; + }); + + return dropDBPromise.then(function (db) { + dbContext.db = db; + for (var i = 0; i < forages.length; i++) { + var _forage = forages[i]; + _advanceReadiness(_forage._dbInfo); + } + })["catch"](function (err) { + (_rejectReadiness(options, err) || Promise$1.resolve())["catch"](function () {}); + throw err; + }); + }); + } else { + promise = dbPromise.then(function (db) { + if (!db.objectStoreNames.contains(options.storeName)) { + return; + } + + var newVersion = db.version + 1; + + _deferReadiness(options); + + var dbContext = dbContexts[options.name]; + var forages = dbContext.forages; + + db.close(); + for (var i = 0; i < forages.length; i++) { + var forage = forages[i]; + forage._dbInfo.db = null; + forage._dbInfo.version = newVersion; + } + + var dropObjectPromise = new Promise$1(function (resolve, reject) { + var req = idb.open(options.name, newVersion); + + req.onerror = function (err) { + var db = req.result; + db.close(); + reject(err); + }; + + req.onupgradeneeded = function () { + var db = req.result; + 
db.deleteObjectStore(options.storeName); + }; + + req.onsuccess = function () { + var db = req.result; + db.close(); + resolve(db); + }; + }); + + return dropObjectPromise.then(function (db) { + dbContext.db = db; + for (var j = 0; j < forages.length; j++) { + var _forage2 = forages[j]; + _forage2._dbInfo.db = db; + _advanceReadiness(_forage2._dbInfo); + } + })["catch"](function (err) { + (_rejectReadiness(options, err) || Promise$1.resolve())["catch"](function () {}); + throw err; + }); + }); + } + } + + executeCallback(promise, callback); + return promise; +} + +var asyncStorage = { + _driver: 'asyncStorage', + _initStorage: _initStorage, + _support: isIndexedDBValid(), + iterate: iterate, + getItem: getItem, + setItem: setItem, + removeItem: removeItem, + clear: clear, + length: length, + key: key, + keys: keys, + dropInstance: dropInstance +}; + +function isWebSQLValid() { + return typeof openDatabase === 'function'; +} + +// Sadly, the best way to save binary data in WebSQL/localStorage is serializing +// it to Base64, so this is how we store it to prevent very strange errors with less +// verbose ways of binary <-> string data storage. +var BASE_CHARS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; + +var BLOB_TYPE_PREFIX = '~~local_forage_type~'; +var BLOB_TYPE_PREFIX_REGEX = /^~~local_forage_type~([^~]+)~/; + +var SERIALIZED_MARKER = '__lfsc__:'; +var SERIALIZED_MARKER_LENGTH = SERIALIZED_MARKER.length; + +// OMG the serializations! 
+var TYPE_ARRAYBUFFER = 'arbf'; +var TYPE_BLOB = 'blob'; +var TYPE_INT8ARRAY = 'si08'; +var TYPE_UINT8ARRAY = 'ui08'; +var TYPE_UINT8CLAMPEDARRAY = 'uic8'; +var TYPE_INT16ARRAY = 'si16'; +var TYPE_INT32ARRAY = 'si32'; +var TYPE_UINT16ARRAY = 'ur16'; +var TYPE_UINT32ARRAY = 'ui32'; +var TYPE_FLOAT32ARRAY = 'fl32'; +var TYPE_FLOAT64ARRAY = 'fl64'; +var TYPE_SERIALIZED_MARKER_LENGTH = SERIALIZED_MARKER_LENGTH + TYPE_ARRAYBUFFER.length; + +var toString$1 = Object.prototype.toString; + +function stringToBuffer(serializedString) { + // Fill the string into an ArrayBuffer. + var bufferLength = serializedString.length * 0.75; + var len = serializedString.length; + var i; + var p = 0; + var encoded1, encoded2, encoded3, encoded4; + + if (serializedString[serializedString.length - 1] === '=') { + bufferLength--; + if (serializedString[serializedString.length - 2] === '=') { + bufferLength--; + } + } + + var buffer = new ArrayBuffer(bufferLength); + var bytes = new Uint8Array(buffer); + + for (i = 0; i < len; i += 4) { + encoded1 = BASE_CHARS.indexOf(serializedString[i]); + encoded2 = BASE_CHARS.indexOf(serializedString[i + 1]); + encoded3 = BASE_CHARS.indexOf(serializedString[i + 2]); + encoded4 = BASE_CHARS.indexOf(serializedString[i + 3]); + + /*jslint bitwise: true */ + bytes[p++] = encoded1 << 2 | encoded2 >> 4; + bytes[p++] = (encoded2 & 15) << 4 | encoded3 >> 2; + bytes[p++] = (encoded3 & 3) << 6 | encoded4 & 63; + } + return buffer; +} + +// Converts a buffer to a string to store, serialized, in the backend +// storage library.
+function bufferToString(buffer) { + // base64-arraybuffer + var bytes = new Uint8Array(buffer); + var base64String = ''; + var i; + + for (i = 0; i < bytes.length; i += 3) { + /*jslint bitwise: true */ + base64String += BASE_CHARS[bytes[i] >> 2]; + base64String += BASE_CHARS[(bytes[i] & 3) << 4 | bytes[i + 1] >> 4]; + base64String += BASE_CHARS[(bytes[i + 1] & 15) << 2 | bytes[i + 2] >> 6]; + base64String += BASE_CHARS[bytes[i + 2] & 63]; + } + + if (bytes.length % 3 === 2) { + base64String = base64String.substring(0, base64String.length - 1) + '='; + } else if (bytes.length % 3 === 1) { + base64String = base64String.substring(0, base64String.length - 2) + '=='; + } + + return base64String; +} + +// Serialize a value, afterwards executing a callback (which usually +// instructs the `setItem()` callback/promise to be executed). This is how +// we store binary data with localStorage. +function serialize(value, callback) { + var valueType = ''; + if (value) { + valueType = toString$1.call(value); + } + + // Cannot use `value instanceof ArrayBuffer` or such here, as these + // checks fail when running the tests using casper.js... + // + // TODO: See why those tests fail and use a better solution. + if (value && (valueType === '[object ArrayBuffer]' || value.buffer && toString$1.call(value.buffer) === '[object ArrayBuffer]')) { + // Convert binary arrays to a string and prefix the string with + // a special marker. 
+ var buffer; + var marker = SERIALIZED_MARKER; + + if (value instanceof ArrayBuffer) { + buffer = value; + marker += TYPE_ARRAYBUFFER; + } else { + buffer = value.buffer; + + if (valueType === '[object Int8Array]') { + marker += TYPE_INT8ARRAY; + } else if (valueType === '[object Uint8Array]') { + marker += TYPE_UINT8ARRAY; + } else if (valueType === '[object Uint8ClampedArray]') { + marker += TYPE_UINT8CLAMPEDARRAY; + } else if (valueType === '[object Int16Array]') { + marker += TYPE_INT16ARRAY; + } else if (valueType === '[object Uint16Array]') { + marker += TYPE_UINT16ARRAY; + } else if (valueType === '[object Int32Array]') { + marker += TYPE_INT32ARRAY; + } else if (valueType === '[object Uint32Array]') { + marker += TYPE_UINT32ARRAY; + } else if (valueType === '[object Float32Array]') { + marker += TYPE_FLOAT32ARRAY; + } else if (valueType === '[object Float64Array]') { + marker += TYPE_FLOAT64ARRAY; + } else { + callback(new Error('Failed to get type for BinaryArray')); + } + } + + callback(marker + bufferToString(buffer)); + } else if (valueType === '[object Blob]') { + // Convert the blob to a binaryArray and then to a string. + var fileReader = new FileReader(); + + fileReader.onload = function () { + // Backwards-compatible prefix for the blob type. + var str = BLOB_TYPE_PREFIX + value.type + '~' + bufferToString(this.result); + + callback(SERIALIZED_MARKER + TYPE_BLOB + str); + }; + + fileReader.readAsArrayBuffer(value); + } else { + try { + callback(JSON.stringify(value)); + } catch (e) { + console.error("Couldn't convert value into a JSON string: ", value); + + callback(null, e); + } + } +} + +// Deserialize data we've inserted into a value column/field. We place +// special markers into our strings to mark them as encoded; this isn't +// as nice as a meta field, but it's the only sane thing we can do whilst +// keeping localStorage support intact.
+// +// Oftentimes this will just deserialize JSON content, but if we have a +// special marker (SERIALIZED_MARKER, defined above), we will extract +// some kind of arraybuffer/binary data/typed array out of the string. +function deserialize(value) { + // If we haven't marked this string as being specially serialized (i.e. + // something other than serialized JSON), we can just return it and be + // done with it. + if (value.substring(0, SERIALIZED_MARKER_LENGTH) !== SERIALIZED_MARKER) { + return JSON.parse(value); + } + + // The following code deals with deserializing some kind of Blob or + // TypedArray. First we separate out the type of data we're dealing + // with from the data itself. + var serializedString = value.substring(TYPE_SERIALIZED_MARKER_LENGTH); + var type = value.substring(SERIALIZED_MARKER_LENGTH, TYPE_SERIALIZED_MARKER_LENGTH); + + var blobType; + // Backwards-compatible blob type serialization strategy. + // DBs created with older versions of localForage will simply not have the blob type. + if (type === TYPE_BLOB && BLOB_TYPE_PREFIX_REGEX.test(serializedString)) { + var matcher = serializedString.match(BLOB_TYPE_PREFIX_REGEX); + blobType = matcher[1]; + serializedString = serializedString.substring(matcher[0].length); + } + var buffer = stringToBuffer(serializedString); + + // Return the right type based on the code/type set during + // serialization. 
+ switch (type) { + case TYPE_ARRAYBUFFER: + return buffer; + case TYPE_BLOB: + return createBlob([buffer], { type: blobType }); + case TYPE_INT8ARRAY: + return new Int8Array(buffer); + case TYPE_UINT8ARRAY: + return new Uint8Array(buffer); + case TYPE_UINT8CLAMPEDARRAY: + return new Uint8ClampedArray(buffer); + case TYPE_INT16ARRAY: + return new Int16Array(buffer); + case TYPE_UINT16ARRAY: + return new Uint16Array(buffer); + case TYPE_INT32ARRAY: + return new Int32Array(buffer); + case TYPE_UINT32ARRAY: + return new Uint32Array(buffer); + case TYPE_FLOAT32ARRAY: + return new Float32Array(buffer); + case TYPE_FLOAT64ARRAY: + return new Float64Array(buffer); + default: + throw new Error('Unknown type: ' + type); + } +} + +var localforageSerializer = { + serialize: serialize, + deserialize: deserialize, + stringToBuffer: stringToBuffer, + bufferToString: bufferToString +}; + +/* + * Includes code from: + * + * base64-arraybuffer + * https://github.com/niklasvh/base64-arraybuffer + * + * Copyright (c) 2012 Niklas von Hertzen + * Licensed under the MIT license. + */ + +function createDbTable(t, dbInfo, callback, errorCallback) { + t.executeSql('CREATE TABLE IF NOT EXISTS ' + dbInfo.storeName + ' ' + '(id INTEGER PRIMARY KEY, key unique, value)', [], callback, errorCallback); +} + +// Open the WebSQL database (automatically creates one if one didn't +// previously exist), using any options set in the config. +function _initStorage$1(options) { + var self = this; + var dbInfo = { + db: null + }; + + if (options) { + for (var i in options) { + dbInfo[i] = typeof options[i] !== 'string' ? options[i].toString() : options[i]; + } + } + + var dbInfoPromise = new Promise$1(function (resolve, reject) { + // Open the database; the openDatabase API will automatically + // create it for us if it doesn't exist.
+ try { + dbInfo.db = openDatabase(dbInfo.name, String(dbInfo.version), dbInfo.description, dbInfo.size); + } catch (e) { + return reject(e); + } + + // Create our key/value table if it doesn't exist. + dbInfo.db.transaction(function (t) { + createDbTable(t, dbInfo, function () { + self._dbInfo = dbInfo; + resolve(); + }, function (t, error) { + reject(error); + }); + }, reject); + }); + + dbInfo.serializer = localforageSerializer; + return dbInfoPromise; +} + +function tryExecuteSql(t, dbInfo, sqlStatement, args, callback, errorCallback) { + t.executeSql(sqlStatement, args, callback, function (t, error) { + if (error.code === error.SYNTAX_ERR) { + t.executeSql('SELECT name FROM sqlite_master ' + "WHERE type='table' AND name = ?", [dbInfo.storeName], function (t, results) { + if (!results.rows.length) { + // if the table is missing (was deleted) + // re-create the table and retry + createDbTable(t, dbInfo, function () { + t.executeSql(sqlStatement, args, callback, errorCallback); + }, errorCallback); + } else { + errorCallback(t, error); + } + }, errorCallback); + } else { + errorCallback(t, error); + } + }, errorCallback); +} + +function getItem$1(key, callback) { + var self = this; + + key = normalizeKey(key); + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + var dbInfo = self._dbInfo; + dbInfo.db.transaction(function (t) { + tryExecuteSql(t, dbInfo, 'SELECT * FROM ' + dbInfo.storeName + ' WHERE key = ? LIMIT 1', [key], function (t, results) { + var result = results.rows.length ? results.rows.item(0).value : null; + + // Check to see if this is serialized content we need to + // unpack.
+ if (result) { + result = dbInfo.serializer.deserialize(result); + } + + resolve(result); + }, function (t, error) { + reject(error); + }); + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function iterate$1(iterator, callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + var dbInfo = self._dbInfo; + + dbInfo.db.transaction(function (t) { + tryExecuteSql(t, dbInfo, 'SELECT * FROM ' + dbInfo.storeName, [], function (t, results) { + var rows = results.rows; + var length = rows.length; + + for (var i = 0; i < length; i++) { + var item = rows.item(i); + var result = item.value; + + // Check to see if this is serialized content + // we need to unpack. + if (result) { + result = dbInfo.serializer.deserialize(result); + } + + result = iterator(result, item.key, i + 1); + + // void(0) prevents problems with redefinition + // of `undefined`. + if (result !== void 0) { + resolve(result); + return; + } + } + + resolve(); + }, function (t, error) { + reject(error); + }); + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function _setItem(key, value, callback, retriesLeft) { + var self = this; + + key = normalizeKey(key); + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + // The localStorage API doesn't return undefined values in an + // "expected" way, so undefined is always cast to null in all + // drivers. See: https://github.com/mozilla/localForage/pull/42 + if (value === undefined) { + value = null; + } + + // Save the original value to pass to the callback. 
+ var originalValue = value; + + var dbInfo = self._dbInfo; + dbInfo.serializer.serialize(value, function (value, error) { + if (error) { + reject(error); + } else { + dbInfo.db.transaction(function (t) { + tryExecuteSql(t, dbInfo, 'INSERT OR REPLACE INTO ' + dbInfo.storeName + ' ' + '(key, value) VALUES (?, ?)', [key, value], function () { + resolve(originalValue); + }, function (t, error) { + reject(error); + }); + }, function (sqlError) { + // The transaction failed; check + // to see if it's a quota error. + if (sqlError.code === sqlError.QUOTA_ERR) { + // We reject the callback outright for now, but + // it's worth trying to re-run the transaction. + // Even if the user accepts the prompt to use + // more storage on Safari, this error will + // be called. + // + // Try to re-run the transaction. + if (retriesLeft > 0) { + resolve(_setItem.apply(self, [key, originalValue, callback, retriesLeft - 1])); + return; + } + reject(sqlError); + } + }); + } + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function setItem$1(key, value, callback) { + return _setItem.apply(this, [key, value, callback, 1]); +} + +function removeItem$1(key, callback) { + var self = this; + + key = normalizeKey(key); + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + var dbInfo = self._dbInfo; + dbInfo.db.transaction(function (t) { + tryExecuteSql(t, dbInfo, 'DELETE FROM ' + dbInfo.storeName + ' WHERE key = ?', [key], function () { + resolve(); + }, function (t, error) { + reject(error); + }); + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +// Deletes every item in the table. +// TODO: Find out if this resets the AUTO_INCREMENT number. 
+function clear$1(callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + var dbInfo = self._dbInfo; + dbInfo.db.transaction(function (t) { + tryExecuteSql(t, dbInfo, 'DELETE FROM ' + dbInfo.storeName, [], function () { + resolve(); + }, function (t, error) { + reject(error); + }); + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +// Does a simple `COUNT(key)` to get the number of items stored in +// localForage. +function length$1(callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + var dbInfo = self._dbInfo; + dbInfo.db.transaction(function (t) { + // Ahhh, SQL makes this one soooooo easy. + tryExecuteSql(t, dbInfo, 'SELECT COUNT(key) as c FROM ' + dbInfo.storeName, [], function (t, results) { + var result = results.rows.item(0).c; + resolve(result); + }, function (t, error) { + reject(error); + }); + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +// Return the key located at key index X; essentially gets the key from a +// `WHERE id = ?`. This is the most efficient way I can think to implement +// this rarely-used (in my experience) part of the API, but it can seem +// inconsistent, because we do `INSERT OR REPLACE INTO` on `setItem()`, so +// the ID of each key will change every time it's updated. Perhaps a stored +// procedure for the `setItem()` SQL would solve this problem? +// TODO: Don't change ID on `setItem()`. +function key$1(n, callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + var dbInfo = self._dbInfo; + dbInfo.db.transaction(function (t) { + tryExecuteSql(t, dbInfo, 'SELECT key FROM ' + dbInfo.storeName + ' WHERE id = ? LIMIT 1', [n + 1], function (t, results) { + var result = results.rows.length ? 
results.rows.item(0).key : null; + resolve(result); + }, function (t, error) { + reject(error); + }); + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +function keys$1(callback) { + var self = this; + + var promise = new Promise$1(function (resolve, reject) { + self.ready().then(function () { + var dbInfo = self._dbInfo; + dbInfo.db.transaction(function (t) { + tryExecuteSql(t, dbInfo, 'SELECT key FROM ' + dbInfo.storeName, [], function (t, results) { + var keys = []; + + for (var i = 0; i < results.rows.length; i++) { + keys.push(results.rows.item(i).key); + } + + resolve(keys); + }, function (t, error) { + reject(error); + }); + }); + })["catch"](reject); + }); + + executeCallback(promise, callback); + return promise; +} + +// https://www.w3.org/TR/webdatabase/#databases +// > There is no way to enumerate or delete the databases available for an origin from this API. +function getAllStoreNames(db) { + return new Promise$1(function (resolve, reject) { + db.transaction(function (t) { + t.executeSql('SELECT name FROM sqlite_master ' + "WHERE type='table' AND name <> '__WebKitDatabaseInfoTable__'", [], function (t, results) { + var storeNames = []; + + for (var i = 0; i < results.rows.length; i++) { + storeNames.push(results.rows.item(i).name); + } + + resolve({ + db: db, + storeNames: storeNames + }); + }, function (t, error) { + reject(error); + }); + }, function (sqlError) { + reject(sqlError); + }); + }); +} + +function dropInstance$1(options, callback) { + callback = getCallback.apply(this, arguments); + + var currentConfig = this.config(); + options = typeof options !== 'function' && options || {}; + if (!options.name) { + options.name = options.name || currentConfig.name; + options.storeName = options.storeName || currentConfig.storeName; + } + + var self = this; + var promise; + if (!options.name) { + promise = Promise$1.reject('Invalid arguments'); + } else { + promise = new Promise$1(function (resolve) { + 
var db; + if (options.name === currentConfig.name) { + // use the db reference of the current instance + db = self._dbInfo.db; + } else { + db = openDatabase(options.name, '', '', 0); + } + + if (!options.storeName) { + // drop all database tables + resolve(getAllStoreNames(db)); + } else { + resolve({ + db: db, + storeNames: [options.storeName] + }); + } + }).then(function (operationInfo) { + return new Promise$1(function (resolve, reject) { + operationInfo.db.transaction(function (t) { + function dropTable(storeName) { + return new Promise$1(function (resolve, reject) { + t.executeSql('DROP TABLE IF EXISTS ' + storeName, [], function () { + resolve(); + }, function (t, error) { + reject(error); + }); + }); + } + + var operations = []; + for (var i = 0, len = operationInfo.storeNames.length; i < len; i++) { + operations.push(dropTable(operationInfo.storeNames[i])); + } + + Promise$1.all(operations).then(function () { + resolve(); + })["catch"](function (e) { + reject(e); + }); + }, function (sqlError) { + reject(sqlError); + }); + }); + }); + } + + executeCallback(promise, callback); + return promise; +} + +var webSQLStorage = { + _driver: 'webSQLStorage', + _initStorage: _initStorage$1, + _support: isWebSQLValid(), + iterate: iterate$1, + getItem: getItem$1, + setItem: setItem$1, + removeItem: removeItem$1, + clear: clear$1, + length: length$1, + key: key$1, + keys: keys$1, + dropInstance: dropInstance$1 +}; + +function isLocalStorageValid() { + try { + return typeof localStorage !== 'undefined' && 'setItem' in localStorage && + // in IE8 typeof localStorage.setItem === 'object' + !!localStorage.setItem; + } catch (e) { + return false; + } +} + +function _getKeyPrefix(options, defaultConfig) { + var keyPrefix = options.name + '/'; + + if (options.storeName !== defaultConfig.storeName) { + keyPrefix += options.storeName + '/'; + } + return keyPrefix; +} + +// Check if localStorage throws when saving an item +function checkIfLocalStorageThrows() { + var 
localStorageTestKey = '_localforage_support_test'; + + try { + localStorage.setItem(localStorageTestKey, true); + localStorage.removeItem(localStorageTestKey); + + return false; + } catch (e) { + return true; + } +} + +// Check if localStorage is usable and allows to save an item +// This method checks if localStorage is usable in Safari Private Browsing +// mode, or in any other case where the available quota for localStorage +// is 0 and there wasn't any saved items yet. +function _isLocalStorageUsable() { + return !checkIfLocalStorageThrows() || localStorage.length > 0; +} + +// Config the localStorage backend, using options set in the config. +function _initStorage$2(options) { + var self = this; + var dbInfo = {}; + if (options) { + for (var i in options) { + dbInfo[i] = options[i]; + } + } + + dbInfo.keyPrefix = _getKeyPrefix(options, self._defaultConfig); + + if (!_isLocalStorageUsable()) { + return Promise$1.reject(); + } + + self._dbInfo = dbInfo; + dbInfo.serializer = localforageSerializer; + + return Promise$1.resolve(); +} + +// Remove all keys from the datastore, effectively destroying all data in +// the app's key/value store! +function clear$2(callback) { + var self = this; + var promise = self.ready().then(function () { + var keyPrefix = self._dbInfo.keyPrefix; + + for (var i = localStorage.length - 1; i >= 0; i--) { + var key = localStorage.key(i); + + if (key.indexOf(keyPrefix) === 0) { + localStorage.removeItem(key); + } + } + }); + + executeCallback(promise, callback); + return promise; +} + +// Retrieve an item from the store. Unlike the original async_storage +// library in Gaia, we don't modify return values at all. If a key's value +// is `undefined`, we pass that value to the callback function. 
+function getItem$2(key, callback) { + var self = this; + + key = normalizeKey(key); + + var promise = self.ready().then(function () { + var dbInfo = self._dbInfo; + var result = localStorage.getItem(dbInfo.keyPrefix + key); + + // If a result was found, parse it from the serialized + // string into a JS object. If result isn't truthy, the key + // is likely undefined and we'll pass it straight to the + // callback. + if (result) { + result = dbInfo.serializer.deserialize(result); + } + + return result; + }); + + executeCallback(promise, callback); + return promise; +} + +// Iterate over all items in the store. +function iterate$2(iterator, callback) { + var self = this; + + var promise = self.ready().then(function () { + var dbInfo = self._dbInfo; + var keyPrefix = dbInfo.keyPrefix; + var keyPrefixLength = keyPrefix.length; + var length = localStorage.length; + + // We use a dedicated iterator instead of the `i` variable below + // so other keys we fetch in localStorage aren't counted in + // the `iterationNumber` argument passed to the `iterate()` + // callback. + // + // See: github.com/mozilla/localForage/pull/435#discussion_r38061530 + var iterationNumber = 1; + + for (var i = 0; i < length; i++) { + var key = localStorage.key(i); + if (key.indexOf(keyPrefix) !== 0) { + continue; + } + var value = localStorage.getItem(key); + + // If a result was found, parse it from the serialized + // string into a JS object. If result isn't truthy, the + // key is likely undefined and we'll pass it straight + // to the iterator. + if (value) { + value = dbInfo.serializer.deserialize(value); + } + + value = iterator(value, key.substring(keyPrefixLength), iterationNumber++); + + if (value !== void 0) { + return value; + } + } + }); + + executeCallback(promise, callback); + return promise; +} + +// Same as localStorage's key() method, except takes a callback. 
+function key$2(n, callback) { + var self = this; + var promise = self.ready().then(function () { + var dbInfo = self._dbInfo; + var result; + try { + result = localStorage.key(n); + } catch (error) { + result = null; + } + + // Remove the prefix from the key, if a key is found. + if (result) { + result = result.substring(dbInfo.keyPrefix.length); + } + + return result; + }); + + executeCallback(promise, callback); + return promise; +} + +function keys$2(callback) { + var self = this; + var promise = self.ready().then(function () { + var dbInfo = self._dbInfo; + var length = localStorage.length; + var keys = []; + + for (var i = 0; i < length; i++) { + var itemKey = localStorage.key(i); + if (itemKey.indexOf(dbInfo.keyPrefix) === 0) { + keys.push(itemKey.substring(dbInfo.keyPrefix.length)); + } + } + + return keys; + }); + + executeCallback(promise, callback); + return promise; +} + +// Supply the number of keys in the datastore to the callback function. +function length$2(callback) { + var self = this; + var promise = self.keys().then(function (keys) { + return keys.length; + }); + + executeCallback(promise, callback); + return promise; +} + +// Remove an item from the store, nice and simple. +function removeItem$2(key, callback) { + var self = this; + + key = normalizeKey(key); + + var promise = self.ready().then(function () { + var dbInfo = self._dbInfo; + localStorage.removeItem(dbInfo.keyPrefix + key); + }); + + executeCallback(promise, callback); + return promise; +} + +// Set a key's value and run an optional callback once the value is set. +// Unlike Gaia's implementation, the callback function is passed the value, +// in case you want to operate on that value only after you're sure it +// saved, or something like that. +function setItem$2(key, value, callback) { + var self = this; + + key = normalizeKey(key); + + var promise = self.ready().then(function () { + // Convert undefined values to null. 
+ // https://github.com/mozilla/localForage/pull/42 + if (value === undefined) { + value = null; + } + + // Save the original value to pass to the callback. + var originalValue = value; + + return new Promise$1(function (resolve, reject) { + var dbInfo = self._dbInfo; + dbInfo.serializer.serialize(value, function (value, error) { + if (error) { + reject(error); + } else { + try { + localStorage.setItem(dbInfo.keyPrefix + key, value); + resolve(originalValue); + } catch (e) { + // localStorage capacity exceeded. + // TODO: Make this a specific error/event. + if (e.name === 'QuotaExceededError' || e.name === 'NS_ERROR_DOM_QUOTA_REACHED') { + reject(e); + } + reject(e); + } + } + }); + }); + }); + + executeCallback(promise, callback); + return promise; +} + +function dropInstance$2(options, callback) { + callback = getCallback.apply(this, arguments); + + options = typeof options !== 'function' && options || {}; + if (!options.name) { + var currentConfig = this.config(); + options.name = options.name || currentConfig.name; + options.storeName = options.storeName || currentConfig.storeName; + } + + var self = this; + var promise; + if (!options.name) { + promise = Promise$1.reject('Invalid arguments'); + } else { + promise = new Promise$1(function (resolve) { + if (!options.storeName) { + resolve(options.name + '/'); + } else { + resolve(_getKeyPrefix(options, self._defaultConfig)); + } + }).then(function (keyPrefix) { + for (var i = localStorage.length - 1; i >= 0; i--) { + var key = localStorage.key(i); + + if (key.indexOf(keyPrefix) === 0) { + localStorage.removeItem(key); + } + } + }); + } + + executeCallback(promise, callback); + return promise; +} + +var localStorageWrapper = { + _driver: 'localStorageWrapper', + _initStorage: _initStorage$2, + _support: isLocalStorageValid(), + iterate: iterate$2, + getItem: getItem$2, + setItem: setItem$2, + removeItem: removeItem$2, + clear: clear$2, + length: length$2, + key: key$2, + keys: keys$2, + dropInstance: 
dropInstance$2 +}; + +var sameValue = function sameValue(x, y) { + return x === y || typeof x === 'number' && typeof y === 'number' && isNaN(x) && isNaN(y); +}; + +var includes = function includes(array, searchElement) { + var len = array.length; + var i = 0; + while (i < len) { + if (sameValue(array[i], searchElement)) { + return true; + } + i++; + } + + return false; +}; + +var isArray = Array.isArray || function (arg) { + return Object.prototype.toString.call(arg) === '[object Array]'; +}; + +// Drivers are stored here when `defineDriver()` is called. +// They are shared across all instances of localForage. +var DefinedDrivers = {}; + +var DriverSupport = {}; + +var DefaultDrivers = { + INDEXEDDB: asyncStorage, + WEBSQL: webSQLStorage, + LOCALSTORAGE: localStorageWrapper +}; + +var DefaultDriverOrder = [DefaultDrivers.INDEXEDDB._driver, DefaultDrivers.WEBSQL._driver, DefaultDrivers.LOCALSTORAGE._driver]; + +var OptionalDriverMethods = ['dropInstance']; + +var LibraryMethods = ['clear', 'getItem', 'iterate', 'key', 'keys', 'length', 'removeItem', 'setItem'].concat(OptionalDriverMethods); + +var DefaultConfig = { + description: '', + driver: DefaultDriverOrder.slice(), + name: 'localforage', + // Default DB size is _JUST UNDER_ 5MB, as it's the highest size + // we can use without a prompt. 
+ size: 4980736, + storeName: 'keyvaluepairs', + version: 1.0 +}; + +function callWhenReady(localForageInstance, libraryMethod) { + localForageInstance[libraryMethod] = function () { + var _args = arguments; + return localForageInstance.ready().then(function () { + return localForageInstance[libraryMethod].apply(localForageInstance, _args); + }); + }; +} + +function extend() { + for (var i = 1; i < arguments.length; i++) { + var arg = arguments[i]; + + if (arg) { + for (var _key in arg) { + if (arg.hasOwnProperty(_key)) { + if (isArray(arg[_key])) { + arguments[0][_key] = arg[_key].slice(); + } else { + arguments[0][_key] = arg[_key]; + } + } + } + } + } + + return arguments[0]; +} + +var LocalForage = function () { + function LocalForage(options) { + _classCallCheck(this, LocalForage); + + for (var driverTypeKey in DefaultDrivers) { + if (DefaultDrivers.hasOwnProperty(driverTypeKey)) { + var driver = DefaultDrivers[driverTypeKey]; + var driverName = driver._driver; + this[driverTypeKey] = driverName; + + if (!DefinedDrivers[driverName]) { + // we don't need to wait for the promise, + // since the default drivers can be defined + // in a blocking manner + this.defineDriver(driver); + } + } + } + + this._defaultConfig = extend({}, DefaultConfig); + this._config = extend({}, this._defaultConfig, options); + this._driverSet = null; + this._initDriver = null; + this._ready = false; + this._dbInfo = null; + + this._wrapLibraryMethodsWithReady(); + this.setDriver(this._config.driver)["catch"](function () {}); + } + + // Set any config values for localForage; can be called anytime before + // the first API call (e.g. `getItem`, `setItem`). + // We loop through options so we don't overwrite existing config + // values. + + + LocalForage.prototype.config = function config(options) { + // If the options argument is an object, we use it to set values. + // Otherwise, we return either a specified config value or all + // config values. + if ((typeof options === 'undefined' ? 
'undefined' : _typeof(options)) === 'object') { + // If localforage is ready and fully initialized, we can't set + // any new configuration values. Instead, we return an error. + if (this._ready) { + return new Error("Can't call config() after localforage " + 'has been used.'); + } + + for (var i in options) { + if (i === 'storeName') { + options[i] = options[i].replace(/\W/g, '_'); + } + + if (i === 'version' && typeof options[i] !== 'number') { + return new Error('Database version must be a number.'); + } + + this._config[i] = options[i]; + } + + // after all config options are set and + // the driver option is used, try setting it + if ('driver' in options && options.driver) { + return this.setDriver(this._config.driver); + } + + return true; + } else if (typeof options === 'string') { + return this._config[options]; + } else { + return this._config; + } + }; + + // Used to define a custom driver, shared across all instances of + // localForage. + + + LocalForage.prototype.defineDriver = function defineDriver(driverObject, callback, errorCallback) { + var promise = new Promise$1(function (resolve, reject) { + try { + var driverName = driverObject._driver; + var complianceError = new Error('Custom driver not compliant; see ' + 'https://mozilla.github.io/localForage/#definedriver'); + + // A driver name should be defined and not overlap with the + // library-defined, default drivers. 
+ if (!driverObject._driver) { + reject(complianceError); + return; + } + + var driverMethods = LibraryMethods.concat('_initStorage'); + for (var i = 0, len = driverMethods.length; i < len; i++) { + var driverMethodName = driverMethods[i]; + + // when the property is there, + // it should be a method even when optional + var isRequired = !includes(OptionalDriverMethods, driverMethodName); + if ((isRequired || driverObject[driverMethodName]) && typeof driverObject[driverMethodName] !== 'function') { + reject(complianceError); + return; + } + } + + var configureMissingMethods = function configureMissingMethods() { + var methodNotImplementedFactory = function methodNotImplementedFactory(methodName) { + return function () { + var error = new Error('Method ' + methodName + ' is not implemented by the current driver'); + var promise = Promise$1.reject(error); + executeCallback(promise, arguments[arguments.length - 1]); + return promise; + }; + }; + + for (var _i = 0, _len = OptionalDriverMethods.length; _i < _len; _i++) { + var optionalDriverMethod = OptionalDriverMethods[_i]; + if (!driverObject[optionalDriverMethod]) { + driverObject[optionalDriverMethod] = methodNotImplementedFactory(optionalDriverMethod); + } + } + }; + + configureMissingMethods(); + + var setDriverSupport = function setDriverSupport(support) { + if (DefinedDrivers[driverName]) { + console.info('Redefining LocalForage driver: ' + driverName); + } + DefinedDrivers[driverName] = driverObject; + DriverSupport[driverName] = support; + // don't use a then, so that we can define + // drivers that have simple _support methods + // in a blocking manner + resolve(); + }; + + if ('_support' in driverObject) { + if (driverObject._support && typeof driverObject._support === 'function') { + driverObject._support().then(setDriverSupport, reject); + } else { + setDriverSupport(!!driverObject._support); + } + } else { + setDriverSupport(true); + } + } catch (e) { + reject(e); + } + }); + + 
executeTwoCallbacks(promise, callback, errorCallback); + return promise; + }; + + LocalForage.prototype.driver = function driver() { + return this._driver || null; + }; + + LocalForage.prototype.getDriver = function getDriver(driverName, callback, errorCallback) { + var getDriverPromise = DefinedDrivers[driverName] ? Promise$1.resolve(DefinedDrivers[driverName]) : Promise$1.reject(new Error('Driver not found.')); + + executeTwoCallbacks(getDriverPromise, callback, errorCallback); + return getDriverPromise; + }; + + LocalForage.prototype.getSerializer = function getSerializer(callback) { + var serializerPromise = Promise$1.resolve(localforageSerializer); + executeTwoCallbacks(serializerPromise, callback); + return serializerPromise; + }; + + LocalForage.prototype.ready = function ready(callback) { + var self = this; + + var promise = self._driverSet.then(function () { + if (self._ready === null) { + self._ready = self._initDriver(); + } + + return self._ready; + }); + + executeTwoCallbacks(promise, callback, callback); + return promise; + }; + + LocalForage.prototype.setDriver = function setDriver(drivers, callback, errorCallback) { + var self = this; + + if (!isArray(drivers)) { + drivers = [drivers]; + } + + var supportedDrivers = this._getSupportedDrivers(drivers); + + function setDriverToConfig() { + self._config.driver = self.driver(); + } + + function extendSelfWithDriver(driver) { + self._extend(driver); + setDriverToConfig(); + + self._ready = self._initStorage(self._config); + return self._ready; + } + + function initDriver(supportedDrivers) { + return function () { + var currentDriverIndex = 0; + + function driverPromiseLoop() { + while (currentDriverIndex < supportedDrivers.length) { + var driverName = supportedDrivers[currentDriverIndex]; + currentDriverIndex++; + + self._dbInfo = null; + self._ready = null; + + return self.getDriver(driverName).then(extendSelfWithDriver)["catch"](driverPromiseLoop); + } + + setDriverToConfig(); + var error = new 
Error('No available storage method found.'); + self._driverSet = Promise$1.reject(error); + return self._driverSet; + } + + return driverPromiseLoop(); + }; + } + + // There might be a driver initialization in progress + // so wait for it to finish in order to avoid a possible + // race condition to set _dbInfo + var oldDriverSetDone = this._driverSet !== null ? this._driverSet["catch"](function () { + return Promise$1.resolve(); + }) : Promise$1.resolve(); + + this._driverSet = oldDriverSetDone.then(function () { + var driverName = supportedDrivers[0]; + self._dbInfo = null; + self._ready = null; + + return self.getDriver(driverName).then(function (driver) { + self._driver = driver._driver; + setDriverToConfig(); + self._wrapLibraryMethodsWithReady(); + self._initDriver = initDriver(supportedDrivers); + }); + })["catch"](function () { + setDriverToConfig(); + var error = new Error('No available storage method found.'); + self._driverSet = Promise$1.reject(error); + return self._driverSet; + }); + + executeTwoCallbacks(this._driverSet, callback, errorCallback); + return this._driverSet; + }; + + LocalForage.prototype.supports = function supports(driverName) { + return !!DriverSupport[driverName]; + }; + + LocalForage.prototype._extend = function _extend(libraryMethodsAndProperties) { + extend(this, libraryMethodsAndProperties); + }; + + LocalForage.prototype._getSupportedDrivers = function _getSupportedDrivers(drivers) { + var supportedDrivers = []; + for (var i = 0, len = drivers.length; i < len; i++) { + var driverName = drivers[i]; + if (this.supports(driverName)) { + supportedDrivers.push(driverName); + } + } + return supportedDrivers; + }; + + LocalForage.prototype._wrapLibraryMethodsWithReady = function _wrapLibraryMethodsWithReady() { + // Add a stub for each driver API method that delays the call to the + // corresponding driver method until localForage is ready. 
These stubs + // will be replaced by the driver methods as soon as the driver is + // loaded, so there is no performance impact. + for (var i = 0, len = LibraryMethods.length; i < len; i++) { + callWhenReady(this, LibraryMethods[i]); + } + }; + + LocalForage.prototype.createInstance = function createInstance(options) { + return new LocalForage(options); + }; + + return LocalForage; +}(); + +// The actual localForage object that we expose as a module or via a +// global. It's extended by pulling in one of our other libraries. + + +var localforage_js = new LocalForage(); + +module.exports = localforage_js; + +},{"3":3}]},{},[4])(4) +}); + +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(27))) + +/***/ }), +/* 33 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* WEBPACK VAR INJECTION */(function(global, process) {/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "b", function() { return getGlobalNamespace; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return getGlobal; }); +/** + * @license + * Copyright 2020 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +// Note that the identifier globalNameSpace is scoped to this module, but will +// always resolve to the same global object regardless of how the module is +// resolved. +// tslint:disable-next-line:no-any +let globalNameSpace; +// tslint:disable-next-line:no-any +function getGlobalNamespace() { + if (globalNameSpace == null) { + // tslint:disable-next-line:no-any + let ns; + if (typeof (window) !== 'undefined') { + ns = window; + } + else if (typeof (global) !== 'undefined') { + ns = global; + } + else if (typeof (process) !== 'undefined') { + ns = process; + } + else if (typeof (self) !== 'undefined') { + ns = self; + } + else { + throw new Error('Could not find a global object'); + } + globalNameSpace = ns; + } + return globalNameSpace; +} +// tslint:disable-next-line:no-any +function getGlobalMap() { + const ns = getGlobalNamespace(); + if (ns._tfGlobals == null) { + ns._tfGlobals = new Map(); + } + return ns._tfGlobals; +} +/** + * Returns a globally accessible 'singleton' object. + * + * @param key the name of the object + * @param init a function to initialize to initialize this object + * the first time it is fetched. + */ +function getGlobal(key, init) { + const globalMap = getGlobalMap(); + if (globalMap.has(key)) { + return globalMap.get(key); + } + else { + const singleton = init(); + globalMap.set(key, singleton); + return globalMap.get(key); + } +} +//# sourceMappingURL=global_util.js.map +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(27), __webpack_require__(35))) + +/***/ }), +/* 34 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return DTYPE_VALUE_SIZE_MAP; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. 
+ * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +/* Type definitions for exporting and importing of models. */ +/** + * A map from Tensor dtype to number of bytes per element of the Tensor. + */ +const DTYPE_VALUE_SIZE_MAP = { + 'float32': 4, + 'float16': 2, + 'int32': 4, + 'uint16': 2, + 'uint8': 1, + 'bool': 1, + 'complex64': 8 +}; +//# sourceMappingURL=types.js.map + +/***/ }), +/* 35 */ +/***/ (function(module, exports) { + +// shim for using process in browser +var process = module.exports = {}; + +// cached from whatever global is present so that test runners that stub it +// don't break things. But we need to wrap it in a try catch in case it is +// wrapped in strict mode code which doesn't define any globals. It's inside a +// function because try/catches deoptimize in certain engines. 
+ +var cachedSetTimeout; +var cachedClearTimeout; + +function defaultSetTimout() { + throw new Error('setTimeout has not been defined'); +} +function defaultClearTimeout () { + throw new Error('clearTimeout has not been defined'); +} +(function () { + try { + if (typeof setTimeout === 'function') { + cachedSetTimeout = setTimeout; + } else { + cachedSetTimeout = defaultSetTimout; + } + } catch (e) { + cachedSetTimeout = defaultSetTimout; + } + try { + if (typeof clearTimeout === 'function') { + cachedClearTimeout = clearTimeout; + } else { + cachedClearTimeout = defaultClearTimeout; + } + } catch (e) { + cachedClearTimeout = defaultClearTimeout; + } +} ()) +function runTimeout(fun) { + if (cachedSetTimeout === setTimeout) { + //normal environments in sane situations + return setTimeout(fun, 0); + } + // if setTimeout wasn't available but was later defined + if ((cachedSetTimeout === defaultSetTimout || !cachedSetTimeout) && setTimeout) { + cachedSetTimeout = setTimeout; + return setTimeout(fun, 0); + } + try { + // when somebody has screwed with setTimeout but no I.E. madness + return cachedSetTimeout(fun, 0); + } catch(e){ + try { + // When we are in I.E. but the script has been evaled so I.E. doesn't trust the global object when called normally + return cachedSetTimeout.call(null, fun, 0); + } catch(e){ + // same as above but when it's a version of I.E.
that must have the global object for 'this', hopefully our context is correct, otherwise it will throw a global error + return cachedSetTimeout.call(this, fun, 0); + } + } + + +} +function runClearTimeout(marker) { + if (cachedClearTimeout === clearTimeout) { + //normal environments in sane situations + return clearTimeout(marker); + } + // if clearTimeout wasn't available but was later defined + if ((cachedClearTimeout === defaultClearTimeout || !cachedClearTimeout) && clearTimeout) { + cachedClearTimeout = clearTimeout; + return clearTimeout(marker); + } + try { + // when somebody has screwed with setTimeout but no I.E. madness + return cachedClearTimeout(marker); + } catch (e){ + try { + // When we are in I.E. but the script has been evaled so I.E. doesn't trust the global object when called normally + return cachedClearTimeout.call(null, marker); + } catch (e){ + // same as above but when it's a version of I.E. that must have the global object for 'this', hopefully our context is correct, otherwise it will throw a global error. + // Some versions of I.E.
have different rules for clearTimeout vs setTimeout + return cachedClearTimeout.call(this, marker); + } + } + + + +} +var queue = []; +var draining = false; +var currentQueue; +var queueIndex = -1; + +function cleanUpNextTick() { + if (!draining || !currentQueue) { + return; + } + draining = false; + if (currentQueue.length) { + queue = currentQueue.concat(queue); + } else { + queueIndex = -1; + } + if (queue.length) { + drainQueue(); + } +} + +function drainQueue() { + if (draining) { + return; + } + var timeout = runTimeout(cleanUpNextTick); + draining = true; + + var len = queue.length; + while(len) { + currentQueue = queue; + queue = []; + while (++queueIndex < len) { + if (currentQueue) { + currentQueue[queueIndex].run(); + } + } + queueIndex = -1; + len = queue.length; + } + currentQueue = null; + draining = false; + runClearTimeout(timeout); +} + +process.nextTick = function (fun) { + var args = new Array(arguments.length - 1); + if (arguments.length > 1) { + for (var i = 1; i < arguments.length; i++) { + args[i - 1] = arguments[i]; + } + } + queue.push(new Item(fun, args)); + if (queue.length === 1 && !draining) { + runTimeout(drainQueue); + } +}; + +// v8 likes predictable objects +function Item(fun, array) { + this.fun = fun; + this.array = array; +} +Item.prototype.run = function () { + this.fun.apply(null, this.array); +}; +process.title = 'browser'; +process.browser = true; +process.env = {}; +process.argv = []; +process.version = ''; // empty string to avoid regexp issues +process.versions = {}; + +function noop() {} + +process.on = noop; +process.addListener = noop; +process.once = noop; +process.off = noop; +process.removeListener = noop; +process.removeAllListeners = noop; +process.emit = noop; +process.prependListener = noop; +process.prependOnceListener = noop; + +process.listeners = function (name) { return [] } + +process.binding = function (name) { + throw new Error('process.binding is not supported'); +}; + +process.cwd = function () { return
'/' }; +process.chdir = function (dir) { + throw new Error('process.chdir is not supported'); +}; +process.umask = function() { return 0; }; + + +/***/ }), +/* 36 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "isMobile", function() { return isMobile; }); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "isBrowser", function() { return isBrowser; }); +/** + * @license + * Copyright 2017 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +// tslint:disable-next-line:no-any +function _isNavigatorDefined() { + return typeof navigator !== 'undefined' && navigator != null; +} +function isMobile() { + if (_isNavigatorDefined()) { + // tslint:disable-next-line:no-any + const a = navigator.userAgent || navigator.vendor || window.opera; + // tslint:disable-next-line:max-line-length + return /(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino/i + .test(a) || + // tslint:disable-next-line:max-line-length + /1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v 
)|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-/i + .test(a.substr(0, 4)); + } + return false; +} +function isBrowser() { + return (typeof window !== 'undefined' && window.document != null) || + //@ts-ignore + (typeof WorkerGlobalScope !== 'undefined'); +} +//# sourceMappingURL=device_util.js.map + +/***/ }), +/* 37 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return maxImpl; }); +/* harmony import */ var _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(0); +/** + * @license + * Copyright 2020 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + +function maxImpl(aVals, reduceSize, outShape, dtype) { + const vals = _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].getTypedArrayFromDType(dtype, _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["util"].sizeFromShape(outShape)); + for (let i = 0; i < vals.length; ++i) { + const offset = i * reduceSize; + let max = aVals[offset]; + for (let j = 0; j < reduceSize; ++j) { + const value = aVals[offset + j]; + if (value > max) { + max = value; + } + } + vals[i] = max; + } + return vals; +} +//# sourceMappingURL=Max_impl.js.map + +/***/ }), +/* 38 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +// ESM COMPAT FLAG +__webpack_require__.r(__webpack_exports__); + +// EXPORTS +__webpack_require__.d(__webpack_exports__, "GraphModel", function() { return /* reexport */ graph_model_GraphModel; }); +__webpack_require__.d(__webpack_exports__, "loadGraphModel", function() { return /* reexport */ loadGraphModel; }); +__webpack_require__.d(__webpack_exports__, "deregisterOp", function() { return /* reexport */ register["a" /* deregisterOp */]; }); +__webpack_require__.d(__webpack_exports__, "registerOp", function() { return /* reexport */ register["c" /* registerOp */]; }); +__webpack_require__.d(__webpack_exports__, "version_converter", function() { return /* reexport */ version; }); + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-core/dist/index.js + 269 modules +var dist = __webpack_require__(0); + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/operation_mapper.js +var operation_mapper = __webpack_require__(15); + +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/utils.js +var utils = __webpack_require__(2); + +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/custom_op/node_value_impl.js +/** + * @license + * Copyright 2019 
Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +/** + * Helper class for looking up inputs and params for nodes in the model graph. + */ +class node_value_impl_NodeValueImpl { + constructor(node, tensorMap, context) { + this.node = node; + this.tensorMap = tensorMap; + this.context = context; + this.inputs = []; + this.attrs = {}; + this.inputs = node.inputNames.map(name => this.getInput(name)); + if (node.rawAttrs != null) { + this.attrs = Object.keys(node.rawAttrs) + .reduce((attrs, key) => { + attrs[key] = this.getAttr(key); + return attrs; + }, {}); + } + } + /** + * Return the value of the attribute or input param. + * @param name String: name of attribute or input param. + */ + getInput(name) { + return Object(utils["c" /* getTensor */])(name, this.tensorMap, this.context); + } + /** + * Return the value of the attribute or input param. + * @param name String: name of attribute or input param. 
+ */ + getAttr(name, defaultValue) { + const value = this.node.rawAttrs[name]; + if (value.tensor != null) { + return Object(utils["c" /* getTensor */])(name, this.tensorMap, this.context); + } + if (value.i != null || value.f != null) { + return Object(operation_mapper["f" /* getNumberParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.s != null) { + return Object(operation_mapper["i" /* getStringParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.b != null) { + return Object(operation_mapper["c" /* getBoolParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.shape != null) { + return Object(operation_mapper["k" /* getTensorShapeParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.type != null) { + return Object(operation_mapper["e" /* getDtypeParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.list != null) { + if (value.list.i != null || value.list.f != null) { + return Object(operation_mapper["g" /* getNumericArrayParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.list.s != null) { + return Object(operation_mapper["h" /* getStringArrayParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.list.shape != null) { + return Object(operation_mapper["j" /* getTensorShapeArrayParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.list.b != null) { + return Object(operation_mapper["b" /* getBoolArrayParam */])(this.node.rawAttrs, name, defaultValue); + } + if (value.list.type != null) { + return Object(operation_mapper["d" /* getDtypeArrayParam */])(this.node.rawAttrs, name, defaultValue); + } + } + return defaultValue; + } +} +//# sourceMappingURL=node_value_impl.js.map +// EXTERNAL MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/custom_op/register.js +var register = __webpack_require__(24); + +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/arithmetic_executor.js +/** + * @license + * 
Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'BiasAdd': + case 'AddV2': + case 'Add': { + return [dist["add"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'AddN': { + return [dist["addN"](Object(utils["b" /* getParamValue */])('tensors', node, tensorMap, context))]; + } + case 'FloorMod': + case 'Mod': + return [dist["mod"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + case 'Mul': + return [dist["mul"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + case 'RealDiv': + case 'Div': { + return [dist["div"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'DivNoNan': { + return [dist["divNoNan"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'FloorDiv': { + return [dist["floorDiv"](Object(utils["b" /* getParamValue */])('a', node, 
tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'Sub': { + return [dist["sub"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'Minimum': { + return [dist["minimum"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'Maximum': { + return [dist["maximum"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'Pow': { + return [dist["pow"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'SquaredDifference': { + return [dist["squaredDifference"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const CATEGORY = 'arithmetic'; +//# sourceMappingURL=arithmetic_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/basic_math_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + +const basic_math_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'Abs': + case 'ComplexAbs': + return [dist["abs"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Acos': + return [dist["acos"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Acosh': + return [dist["acosh"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Asin': + return [dist["asin"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Asinh': + return [dist["asinh"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Atan': + return [dist["atan"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Atan2': + return [dist["atan2"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('y', node, tensorMap, context))]; + case 'Atanh': + return [dist["atanh"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Ceil': + return [dist["ceil"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Complex': + return [dist["complex"](Object(utils["b" /* getParamValue */])('real', node, tensorMap, context), Object(utils["b" /* getParamValue */])('imag', node, tensorMap, context))]; + case 'Cos': + return [dist["cos"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Cosh': + return [dist["cosh"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Elu': + return [dist["elu"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Erf': + return [dist["erf"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Exp': + return [dist["exp"](Object(utils["b" /* getParamValue */])('x', node, 
tensorMap, context))]; + case 'Expm1': { + return [dist["expm1"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Floor': + return [dist["floor"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Log': + return [dist["log"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Log1p': { + return [dist["log1p"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Imag': + return [dist["imag"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Neg': + return [dist["neg"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Reciprocal': { + return [dist["reciprocal"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Real': + return [dist["real"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Relu': + return [dist["relu"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Round': { + return [dist["round"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Selu': + return [dist["selu"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Sigmoid': + return [dist["sigmoid"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Sin': + return [dist["sin"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Sign': { + return [dist["sign"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Sinh': { + return [dist["sinh"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Softplus': { + return [dist["softplus"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Sqrt': { + return [dist["sqrt"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 
'Square': { + return [dist["square"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Tanh': { + return [dist["tanh"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'Tan': + return [dist["tan"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + case 'Relu6': + case 'ClipByValue': + return [dist["clipByValue"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('clipValueMin', node, tensorMap, context), Object(utils["b" /* getParamValue */])('clipValueMax', node, tensorMap, context))]; + case 'Rsqrt': + return [dist["rsqrt"](Object(utils["c" /* getTensor */])(node.inputNames[0], tensorMap, context))]; + case 'Prod': + return [dist["prod"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('axes', node, tensorMap, context))]; + case 'LeakyRelu': + return [dist["leakyRelu"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('alpha', node, tensorMap, context))]; + case 'Prelu': + return [dist["prelu"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('alpha', node, tensorMap, context))]; + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const basic_math_executor_CATEGORY = 'basic_math'; +//# sourceMappingURL=basic_math_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/executor/tensor_utils.js +/** + * @license + * Copyright 2020 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +/** + * This differs from util.assertShapesMatch in that it allows values of + * negative one, an undefined size of a dimension, in a shape to match + * anything. + */ + +function assertShapesMatchAllowUndefinedSize(shapeA, shapeB, errorMessagePrefix = '') { + dist["util"].assert(shapesEqualAllowUndefinedSize(shapeA, shapeB), () => errorMessagePrefix + ` Shapes ${shapeA} and ${shapeB} must match`); +} +function shapesEqualAllowUndefinedSize(n1, n2) { + if (n1.length !== n2.length) { + return false; + } + for (let i = 0; i < n1.length; i++) { + if (n1[i] !== -1 && n2[i] !== -1 && n1[i] !== n2[i]) { + return false; + } + } + return true; +} +//# sourceMappingURL=tensor_utils.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/executor/tensor_array.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + +/** + * The TensorArray object keeps an array of Tensors. It + * allows reading from the array and writing to the array. + */ +class tensor_array_TensorArray { + constructor(name, dtype, maxSize, elementShape, identicalElementShapes, dynamicSize, clearAfterRead) { + this.name = name; + this.dtype = dtype; + this.maxSize = maxSize; + this.elementShape = elementShape; + this.identicalElementShapes = identicalElementShapes; + this.dynamicSize = dynamicSize; + this.clearAfterRead = clearAfterRead; + this.tensors = []; + this.closed_ = false; + this.id = tensor_array_TensorArray.nextId++; + } + get closed() { + return this.closed_; + } + /** + * Close the current TensorArray. + */ + clearAndClose() { + this.tensors.forEach(tensor => tensor.tensor.dispose()); + this.tensors = []; + this.closed_ = true; + } + size() { + return this.tensors.length; + } + /** + * Read the value at location index in the TensorArray. + * @param index Number the index to read from. + */ + read(index) { + if (this.closed_) { + throw new Error(`TensorArray ${this.name} has already been closed.`); + } + if (index < 0 || index >= this.size()) { + throw new Error(`Tried to read from index ${index}, but array size is: ${this.size()}`); + } + const tensorWithState = this.tensors[index]; + if (tensorWithState.cleared) { + throw new Error(`TensorArray ${this.name}: Could not read index ${index} twice because it was cleared after a previous read ` + + `(perhaps try setting clear_after_read = false?).`); + } + if (this.clearAfterRead) { + tensorWithState.cleared = true; + } + tensorWithState.read = true; + return tensorWithState.tensor; + } + /** + * Helper method to read multiple tensors from the specified indices. + */ + readMany(indices) { + return indices.map(index => this.read(index)); + } + /** + * Write value into the index of the TensorArray. + * @param index number the index to write to. 
+ * @param tensor + */ + write(index, tensor) { + if (this.closed_) { + throw new Error(`TensorArray ${this.name} has already been closed.`); + } + if (index < 0 || !this.dynamicSize && index >= this.maxSize) { + throw new Error(`Tried to write to index ${index}, but array is not resizeable and size is: ${this.maxSize}`); + } + const t = this.tensors[index] || {}; + if (tensor.dtype !== this.dtype) { + throw new Error(`TensorArray ${this.name}: Could not write to TensorArray index ${index}, + because the value dtype is ${tensor.dtype}, but TensorArray dtype is ${this.dtype}.`); + } + // Set the shape on the first write to a tensor array with unknown shape + if (this.size() === 0 && + (this.elementShape == null || this.elementShape.length === 0)) { + this.elementShape = tensor.shape; + } + assertShapesMatchAllowUndefinedSize(this.elementShape, tensor.shape, `TensorArray ${this.name}: Could not write to TensorArray index ${index}.`); + if (t && t.read) { + throw new Error(`TensorArray ${this.name}: Could not write to TensorArray index ${index}, because it has already been read.`); + } + if (t && t.written) { + throw new Error(`TensorArray ${this.name}: Could not write to TensorArray index ${index}, because it has already been written.`); + } + t.tensor = tensor; + t.written = true; + this.tensors[index] = t; + } + /** + * Helper method to write multiple tensors to the specified indices. + */ + writeMany(indices, tensors) { + if (indices.length !== tensors.length) { + throw new Error(`TensorArray ${this.name}: could not write multiple tensors,` + + `because the index size: ${indices.length} is not the same as tensors size: ${tensors.length}.`); + } + indices.forEach((i, index) => this.write(i, tensors[index])); + } + /** + * Return selected values in the TensorArray as a packed Tensor. All of + * selected values must have been written and their shapes must all match. + * @param [indices] number[] Optional. Taking values in [0, max_value). 
If the + * TensorArray is not dynamic, max_value=size(). If not specified returns + * all tensors in the original order. + * @param [dtype] + */ + gather(indices, dtype) { + if (!!dtype && dtype !== this.dtype) { + throw new Error(`TensorArray dtype is ${this.dtype} but gather requested dtype ${dtype}`); + } + if (!indices) { + indices = []; + for (let i = 0; i < this.size(); i++) { + indices.push(i); + } + } + else { + indices = indices.slice(0, this.size()); + } + if (indices.length === 0) { + return Object(dist["tensor"])([], [0].concat(this.elementShape)); + } + // Read all the PersistentTensors into a vector to keep track of + // their memory. + const tensors = this.readMany(indices); + assertShapesMatchAllowUndefinedSize(this.elementShape, tensors[0].shape, 'TensorArray shape mismatch: '); + return Object(dist["stack"])(tensors, 0); + } + /** + * Return the values in the TensorArray as a concatenated Tensor. + */ + concat(dtype) { + if (!!dtype && dtype !== this.dtype) { + throw new Error(`TensorArray dtype is ${this.dtype} but concat requested dtype ${dtype}`); + } + if (this.size() === 0) { + return Object(dist["tensor"])([], [0].concat(this.elementShape)); + } + const indices = []; + for (let i = 0; i < this.size(); i++) { + indices.push(i); + } + // Collect all the tensors from the tensors array. + const tensors = this.readMany(indices); + assertShapesMatchAllowUndefinedSize(this.elementShape, tensors[0].shape, `TensorArray shape mismatch: tensor array shape (${this.elementShape}) vs first tensor shape (${tensors[0].shape})`); + return Object(dist["concat"])(tensors, 0); + } + /** + * Scatter the values of a Tensor in specific indices of a TensorArray. + * @param indices number[] values in [0, max_value). If the + * TensorArray is not dynamic, max_value=size(). + * @param tensor Tensor input tensor. 
+ */ + scatter(indices, tensor) { + if (tensor.dtype !== this.dtype) { + throw new Error(`TensorArray dtype is ${this.dtype} but tensor has dtype ${tensor.dtype}`); + } + if (indices.length !== tensor.shape[0]) { + throw new Error(`Expected len(indices) == tensor.shape[0], but saw: ${indices.length} vs. ${tensor.shape[0]}`); + } + const maxIndex = Math.max(...indices); + if (!this.dynamicSize && maxIndex >= this.maxSize) { + throw new Error(`Max index must be < array size (${maxIndex} vs. ${this.maxSize})`); + } + this.writeMany(indices, Object(dist["unstack"])(tensor, 0)); + } + /** + * Split the values of a Tensor into the TensorArray. + * @param length number[] with the lengths to use when splitting value along + * its first dimension. + * @param tensor Tensor, the tensor to split. + */ + split(length, tensor) { + if (tensor.dtype !== this.dtype) { + throw new Error(`TensorArray dtype is ${this.dtype} but tensor has dtype ${tensor.dtype}`); + } + let totalLength = 0; + const cumulativeLengths = length.map(len => { + totalLength += len; + return totalLength; + }); + if (totalLength !== tensor.shape[0]) { + throw new Error(`Expected sum of lengths to be equal to + tensor.shape[0], but sum of lengths is + ${totalLength}, and tensor's shape is: ${tensor.shape}`); + } + if (!this.dynamicSize && length.length !== this.maxSize) { + throw new Error(`TensorArray's size is not equal to the size of lengths (${this.maxSize} vs. ${length.length}), ` + + 'and the TensorArray is not marked as dynamically resizeable'); + } + const elementPerRow = totalLength === 0 ? 0 : tensor.size / totalLength; + const tensors = []; + Object(dist["tidy"])(() => { + tensor = tensor.reshape([1, totalLength, elementPerRow]); + for (let i = 0; i < length.length; ++i) { + const previousLength = (i === 0) ? 
0 : cumulativeLengths[i - 1]; + const indices = [0, previousLength, 0]; + const sizes = [1, length[i], elementPerRow]; + tensors[i] = Object(dist["slice"])(tensor, indices, sizes).reshape(this.elementShape); + } + return tensors; + }); + const indices = []; + for (let i = 0; i < length.length; i++) { + indices[i] = i; + } + this.writeMany(indices, tensors); + } +} +tensor_array_TensorArray.nextId = 0; +//# sourceMappingURL=tensor_array.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/control_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + + +const control_executor_executeOp = async (node, tensorMap, context) => { + switch (node.op) { + case 'If': + case 'StatelessIf': { + const thenFunc = Object(utils["b" /* getParamValue */])('thenBranch', node, tensorMap, context); + const elseFunc = Object(utils["b" /* getParamValue */])('elseBranch', node, tensorMap, context); + const cond = Object(utils["b" /* getParamValue */])('cond', node, tensorMap, context); + const args = Object(utils["b" /* getParamValue */])('args', node, tensorMap, context); + const condValue = await cond.data(); + if (condValue[0]) { + return context.functionMap[thenFunc].executeFunctionAsync(args); + } + else { + return context.functionMap[elseFunc].executeFunctionAsync(args); + } + } + case 'While': + case 'StatelessWhile': { + const bodyFunc = Object(utils["b" /* getParamValue */])('body', node, tensorMap, context); + const condFunc = Object(utils["b" /* getParamValue */])('cond', node, tensorMap, context); + const args = Object(utils["b" /* getParamValue */])('args', node, tensorMap, context); + const condTensor = (await context.functionMap[condFunc].executeFunctionAsync(args))[0]; + let condValue = await condTensor.data(); + let result = args; + while (condValue[0]) { + result = + await context.functionMap[bodyFunc].executeFunctionAsync(result); + const condTensor = (await context.functionMap[condFunc].executeFunctionAsync(result))[0]; + condValue = await condTensor.data(); + } + return result; + } + case 'LoopCond': + return [ + Object(utils["b" /* getParamValue */])('pred', node, tensorMap, context).clone() + ]; + case 'Switch': { + const pred = Object(utils["b" /* getParamValue */])('pred', node, tensorMap, context); + const data = Object(utils["b" /* getParamValue */])('data', node, tensorMap, context); + // Outputs nodes :0 => false, :1 => true + return (await pred.data())[0] ? 
[undefined, data.clone()] : + [data.clone(), undefined]; + } + case 'Merge': + const inputName = node.inputNames.find(name => Object(utils["c" /* getTensor */])(name, tensorMap, context) !== undefined); + return inputName ? [Object(utils["c" /* getTensor */])(inputName, tensorMap, context).clone()] : + undefined; + case 'Enter': + const frameId = Object(utils["b" /* getParamValue */])('frameName', node, tensorMap, context); + const data = Object(utils["b" /* getParamValue */])('tensor', node, tensorMap, context); + context.enterFrame(frameId); + return [data.clone()]; + case 'Exit': + const tensor = Object(utils["b" /* getParamValue */])('tensor', node, tensorMap, context); + context.exitFrame(); + return [tensor.clone()]; + case 'NextIteration': + const input = Object(utils["b" /* getParamValue */])('tensor', node, tensorMap, context); + context.nextIteration(); + return [input.clone()]; + case 'TensorArrayV3': + const size = Object(utils["b" /* getParamValue */])('size', node, tensorMap, context); + const dtype = Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context); + const elementShape = Object(utils["b" /* getParamValue */])('elementShape', node, tensorMap, context); + const dynamicSize = Object(utils["b" /* getParamValue */])('dynamicSize', node, tensorMap, context); + const clearAfterRead = Object(utils["b" /* getParamValue */])('clearAfterRead', node, tensorMap, context); + const identicalElementShapes = Object(utils["b" /* getParamValue */])('identicalElementShapes', node, tensorMap, context); + const name = Object(utils["b" /* getParamValue */])('name', node, tensorMap, context); + const tensorArray = new tensor_array_TensorArray(name, dtype, size, elementShape, identicalElementShapes, dynamicSize, clearAfterRead); + context.addTensorArray(tensorArray); + return [Object(dist["scalar"])(tensorArray.id), Object(dist["scalar"])(1.0)]; + case 'TensorArrayWriteV3': + const id = Object(utils["b" /* getParamValue */])('tensorArrayId', node, 
tensorMap, context); + const index = Object(utils["b" /* getParamValue */])('index', node, tensorMap, context); + const writeTensor = Object(utils["b" /* getParamValue */])('tensor', node, tensorMap, context); + const writeTensorArray = context.getTensorArray(id); + writeTensorArray.write(index, writeTensor); + return [Object(dist["scalar"])(1.0)]; + case 'TensorArrayReadV3': + const readId = Object(utils["b" /* getParamValue */])('tensorArrayId', node, tensorMap, context); + const readIndex = Object(utils["b" /* getParamValue */])('index', node, tensorMap, context); + const readTensorArray = context.getTensorArray(readId); + return [readTensorArray.read(readIndex)]; + case 'TensorArrayGatherV3': + const gatherId = Object(utils["b" /* getParamValue */])('tensorArrayId', node, tensorMap, context); + const gatherIndices = Object(utils["b" /* getParamValue */])('indices', node, tensorMap, context); + const gatherDtype = Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context); + const gatherTensorArray = context.getTensorArray(gatherId); + return [gatherTensorArray.gather(gatherIndices, gatherDtype)]; + case 'TensorArrayScatterV3': + const scatterId = Object(utils["b" /* getParamValue */])('tensorArrayId', node, tensorMap, context); + const scatterIndices = Object(utils["b" /* getParamValue */])('indices', node, tensorMap, context); + const scatterTensor = Object(utils["b" /* getParamValue */])('tensor', node, tensorMap, context); + const scatterTensorArray = context.getTensorArray(scatterId); + scatterTensorArray.scatter(scatterIndices, scatterTensor); + return [Object(dist["scalar"])(1.0)]; + case 'TensorArrayConcatV3': + const concatId = Object(utils["b" /* getParamValue */])('tensorArrayId', node, tensorMap, context); + const concatTensorArray = context.getTensorArray(concatId); + const concatDtype = Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context); + return [concatTensorArray.concat(concatDtype)]; + case 
'TensorArraySplitV3': + const splitId = Object(utils["b" /* getParamValue */])('tensorArrayId', node, tensorMap, context); + const splitTensor = Object(utils["b" /* getParamValue */])('tensor', node, tensorMap, context); + const lengths = Object(utils["b" /* getParamValue */])('lengths', node, tensorMap, context); + const splitTensorArray = context.getTensorArray(splitId); + splitTensorArray.split(lengths, splitTensor); + return [Object(dist["scalar"])(1.0)]; + case 'TensorArraySizeV3': + const sizeId = Object(utils["b" /* getParamValue */])('tensorArrayId', node, tensorMap, context); + const sizeTensorArray = context.getTensorArray(sizeId); + return [Object(dist["scalar"])(sizeTensorArray.size(), 'int32')]; + case 'TensorArrayCloseV3': + const closeId = Object(utils["b" /* getParamValue */])('tensorArrayId', node, tensorMap, context); + const closeTensorArray = context.getTensorArray(closeId); + closeTensorArray.clearAndClose(); + return [Object(dist["scalar"])(0)]; + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const control_executor_CATEGORY = 'control'; +//# sourceMappingURL=control_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/convolution_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + +const convolution_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'Conv1D': { + const stride = Object(utils["b" /* getParamValue */])('stride', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const dataFormat = Object(utils["b" /* getParamValue */])('dataFormat', node, tensorMap, context) + .toUpperCase(); + const dilation = Object(utils["b" /* getParamValue */])('dilation', node, tensorMap, context); + return [dist["conv1d"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('filter', node, tensorMap, context), stride, pad, dataFormat, dilation)]; + } + case 'Conv2D': { + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const dataFormat = Object(utils["b" /* getParamValue */])('dataFormat', node, tensorMap, context) + .toUpperCase(); + const dilations = Object(utils["b" /* getParamValue */])('dilations', node, tensorMap, context); + return [dist["conv2d"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('filter', node, tensorMap, context), [stride[1], stride[2]], pad, dataFormat, [dilations[1], dilations[2]])]; + } + case '_FusedConv2D': + case 'FusedDepthwiseConv2dNative': { + const [extraOp, activationFunc] = Object(utils["b" /* getParamValue */])('fusedOps', node, tensorMap, context); + const isBiasAdd = extraOp === 'biasadd'; + const isPrelu = activationFunc === 'prelu'; + const isBatchNorm = extraOp === 'fusedbatchnorm'; + const numArgs = Object(utils["b" /* getParamValue */])('numArgs', node, tensorMap, context); + if (isBiasAdd) { + if (isPrelu && numArgs !== 2) { + throw new Error('FusedConv2d and DepthwiseConv2d with 
BiasAdd and Prelu ' + + 'must have two extra arguments: bias and alpha.'); + } + if (!isPrelu && numArgs !== 1) { + throw new Error('FusedConv2d and DepthwiseConv2d with BiasAdd must have ' + + 'one extra argument: bias.'); + } + } + if (isBatchNorm) { + throw new Error('FusedConv2d and DepthwiseConv2d with FusedBatchNorm is not supported.'); + } + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const dataFormat = Object(utils["b" /* getParamValue */])('dataFormat', node, tensorMap, context) + .toUpperCase(); + const dilations = Object(utils["b" /* getParamValue */])('dilations', node, tensorMap, context); + const [biasArg, preluArg] = Object(utils["b" /* getParamValue */])('args', node, tensorMap, context); + const kernelMethod = node.op === '_FusedConv2D' ? + dist["fused"].conv2d : + dist["fused"].depthwiseConv2d; + return [kernelMethod({ + x: Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), + filter: Object(utils["b" /* getParamValue */])('filter', node, tensorMap, context), + strides: [stride[1], stride[2]], + pad: pad, + dataFormat: dataFormat, + dilations: [dilations[1], dilations[2]], + bias: biasArg, + activation: activationFunc, + preluActivationWeights: preluArg + })]; + } + case 'Conv2DBackpropInput': + case 'Conv2dTranspose': { + const shape = Object(utils["b" /* getParamValue */])('outputShape', node, tensorMap, context); + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + return [dist["conv2dTranspose"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('filter', node, tensorMap, context), shape, [stride[1], stride[2]], pad)]; + } + case 'DepthwiseConv2dNative': + case 'DepthwiseConv2d': { + const stride = 
Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const dilations = Object(utils["b" /* getParamValue */])('dilations', node, tensorMap, context); + const dataFormat = Object(utils["b" /* getParamValue */])('dataFormat', node, tensorMap, context) + .toUpperCase(); + return [dist["depthwiseConv2d"](Object(utils["b" /* getParamValue */])('input', node, tensorMap, context), Object(utils["b" /* getParamValue */])('filter', node, tensorMap, context), [stride[1], stride[2]], pad, dataFormat, [dilations[1], dilations[2]])]; + } + case 'Conv3D': { + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const dataFormat = Object(utils["b" /* getParamValue */])('dataFormat', node, tensorMap, context) + .toUpperCase(); + const dilations = Object(utils["b" /* getParamValue */])('dilations', node, tensorMap, context); + return [dist["conv3d"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('filter', node, tensorMap, context), [stride[1], stride[2], stride[3]], pad, dataFormat, [dilations[1], dilations[2], dilations[3]])]; + } + case 'AvgPool': { + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const kernelSize = Object(utils["b" /* getParamValue */])('kernelSize', node, tensorMap, context); + return [dist["avgPool"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), [kernelSize[1], kernelSize[2]], [stride[1], stride[2]], pad)]; + } + case 'MaxPool': { + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, 
context); + const kernelSize = Object(utils["b" /* getParamValue */])('kernelSize', node, tensorMap, context); + return [dist["maxPool"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), [kernelSize[1], kernelSize[2]], [stride[1], stride[2]], pad)]; + } + case 'MaxPoolWithArgmax': { + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const kernelSize = Object(utils["b" /* getParamValue */])('kernelSize', node, tensorMap, context); + const includeBatchInIndex = Object(utils["b" /* getParamValue */])('includeBatchInIndex', node, tensorMap, context); + const { result, indexes } = dist["maxPoolWithArgmax"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), [kernelSize[1], kernelSize[2]], [stride[1], stride[2]], pad, includeBatchInIndex); + return [result, indexes]; + } + case 'AvgPool3D': { + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const kernelSize = Object(utils["b" /* getParamValue */])('kernelSize', node, tensorMap, context); + return [dist["avgPool3d"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), [kernelSize[1], kernelSize[2], kernelSize[3]], [stride[1], stride[2], stride[3]], pad)]; + } + case 'MaxPool3D': { + const stride = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const pad = Object(utils["b" /* getParamValue */])('pad', node, tensorMap, context); + const kernelSize = Object(utils["b" /* getParamValue */])('kernelSize', node, tensorMap, context); + return [dist["maxPool3d"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), [kernelSize[1], kernelSize[2], kernelSize[3]], [stride[1], stride[2], stride[3]], pad)]; + } + default: + throw TypeError(`Node type ${node.op} is not 
implemented`); + } +}; +const convolution_executor_CATEGORY = 'convolution'; +//# sourceMappingURL=convolution_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/creation_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const creation_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'Fill': { + const shape = Object(utils["b" /* getParamValue */])('shape', node, tensorMap, context); + const dtype = Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context); + const value = Object(utils["b" /* getParamValue */])('value', node, tensorMap, context); + return [dist["fill"](shape, value, dtype)]; + } + case 'LinSpace': { + const start = Object(utils["b" /* getParamValue */])('start', node, tensorMap, context); + const stop = Object(utils["b" /* getParamValue */])('stop', node, tensorMap, context); + const num = Object(utils["b" /* getParamValue */])('num', node, tensorMap, context); + return [dist["linspace"](start, stop, num)]; + } + case 'Multinomial': { + const logits = Object(utils["b" /* getParamValue */])('logits', node, tensorMap, context); + const numSamples = Object(utils["b" /* getParamValue */])('numSamples', node, tensorMap, context); + const seed = Object(utils["b" /* getParamValue 
*/])('seed', node, tensorMap, context); + return [dist["multinomial"](logits, numSamples, seed)]; + } + case 'OneHot': { + const indices = Object(utils["b" /* getParamValue */])('indices', node, tensorMap, context); + const depth = Object(utils["b" /* getParamValue */])('depth', node, tensorMap, context); + const onValue = Object(utils["b" /* getParamValue */])('onValue', node, tensorMap, context); + const offValue = Object(utils["b" /* getParamValue */])('offValue', node, tensorMap, context); + return [dist["oneHot"](indices, depth, onValue, offValue)]; + } + case 'Ones': { + return [dist["ones"](Object(utils["b" /* getParamValue */])('shape', node, tensorMap, context), Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context))]; + } + case 'OnesLike': { + return [dist["onesLike"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'RandomUniform': { + return [dist["randomUniform"]( + // tslint:disable-next-line:no-any + Object(utils["b" /* getParamValue */])('shape', node, tensorMap, context), Object(utils["b" /* getParamValue */])('minval', node, tensorMap, context), Object(utils["b" /* getParamValue */])('maxval', node, tensorMap, context), Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context))]; + } + case 'Range': { + const start = Object(utils["b" /* getParamValue */])('start', node, tensorMap, context); + const stop = Object(utils["b" /* getParamValue */])('stop', node, tensorMap, context); + const step = Object(utils["b" /* getParamValue */])('step', node, tensorMap, context); + return [dist["range"](start, stop, step, Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context))]; + } + case 'TruncatedNormal': { + const shape = Object(utils["b" /* getParamValue */])('shape', node, tensorMap, context); + const mean = Object(utils["b" /* getParamValue */])('mean', node, tensorMap, context); + const stdDev = Object(utils["b" /* getParamValue */])('stdDev', node, tensorMap, 
context); + const seed = Object(utils["b" /* getParamValue */])('seed', node, tensorMap, context); + return [dist["truncatedNormal"](shape, mean, stdDev, Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context), seed)]; + } + case 'Zeros': { + return [dist["zeros"](Object(utils["b" /* getParamValue */])('shape', node, tensorMap, context), Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context))]; + } + case 'ZerosLike': { + return [dist["zerosLike"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const creation_executor_CATEGORY = 'creation'; +//# sourceMappingURL=creation_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/dynamic_executor.js +/** + * @license + * Copyright 2018 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + +const dynamic_executor_executeOp = async (node, tensorMap, context) => { + switch (node.op) { + case 'NonMaxSuppressionV5': + case 'NonMaxSuppressionV3': + case 'NonMaxSuppressionV2': { + const boxes = Object(utils["b" /* getParamValue */])('boxes', node, tensorMap, context); + const scores = Object(utils["b" /* getParamValue */])('scores', node, tensorMap, context); + const maxOutputSize = Object(utils["b" /* getParamValue */])('maxOutputSize', node, tensorMap, context); + const iouThreshold = Object(utils["b" /* getParamValue */])('iouThreshold', node, tensorMap, context); + const scoreThreshold = Object(utils["b" /* getParamValue */])('scoreThreshold', node, tensorMap, context); + if (node.op === 'NonMaxSuppressionV5') { + const softNmsSigma = Object(utils["b" /* getParamValue */])('softNmsSigma', node, tensorMap, context); + const result = await dist["image"].nonMaxSuppressionWithScoreAsync(boxes, scores, maxOutputSize, iouThreshold, scoreThreshold, softNmsSigma); + return [result.selectedIndices, result.selectedScores]; + } + return [await dist["image"].nonMaxSuppressionAsync(boxes, scores, maxOutputSize, iouThreshold, scoreThreshold)]; + } + case 'Where': { + const condition = Object(utils["b" /* getParamValue */])('condition', node, tensorMap, context) + .asType('bool'); + const result = [await dist["whereAsync"](condition)]; + condition.dispose(); + return result; + } + case 'ListDiff': { + return dist["setdiff1dAsync"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('y', node, tensorMap, context)); + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const dynamic_executor_CATEGORY = 'dynamic'; +//# sourceMappingURL=dynamic_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/evaluation_executor.js +/** + * @license + 
* Copyright 2018 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const evaluation_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'TopKV2': { + const x = Object(utils["b" /* getParamValue */])('x', node, tensorMap, context); + const k = Object(utils["b" /* getParamValue */])('k', node, tensorMap, context); + const sorted = Object(utils["b" /* getParamValue */])('sorted', node, tensorMap, context); + const result = dist["topk"](x, k, sorted); + return [result.values, result.indices]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const evaluation_executor_CATEGORY = 'evaluation'; +//# sourceMappingURL=evaluation_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/graph_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const graph_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'Const': { + return tensorMap[node.name]; + } + case 'PlaceholderWithDefault': + const def = Object(utils["b" /* getParamValue */])('default', node, tensorMap, context); + return [Object(utils["c" /* getTensor */])(node.name, tensorMap, context) || def]; + case 'Placeholder': + return [Object(utils["c" /* getTensor */])(node.name, tensorMap, context)]; + case 'Identity': + case 'StopGradient': + case 'FakeQuantWithMinMaxVars': // This op is currently ignored. + return [ + Object(utils["b" /* getParamValue */])('x', node, tensorMap, context).clone() + ]; + case 'IdentityN': + return Object(utils["b" /* getParamValue */])('x', node, tensorMap, context) + .map((t) => t.clone()); + case 'Snapshot': + const snapshot = Object(utils["b" /* getParamValue */])('x', node, tensorMap, context); + return [snapshot.clone()]; + case 'Shape': + return [dist["tensor1d"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context).shape, 'int32')]; + case 'ShapeN': + return Object(utils["b" /* getParamValue */])('x', node, tensorMap, context) + .map((t) => dist["tensor1d"](t.shape)); + case 'Size': + return [dist["scalar"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context).size, 'int32')]; + case 'Rank': + return [dist["scalar"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context).rank, 'int32')]; + case 'NoOp': + return [dist["scalar"](1)]; + case 'Print': + const input = Object(utils["b" /* getParamValue */])('x', node, tensorMap, context); + const data = Object(utils["b" /* getParamValue */])('data', node, tensorMap, context); + const message = Object(utils["b" /* getParamValue */])('message', node, tensorMap, context); + const summarize = 
Object(utils["b" /* getParamValue */])('summarize', node, tensorMap, context); + console.warn('The graph has a tf.print() operation,' + + 'usually used for debugging, which slows down performance.'); + console.log(message); + for (let i = 0; i < data.length; i++) { + console.log(Array.prototype.slice.call(data[i].dataSync()).slice(0, summarize)); + } + return [input]; + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const graph_executor_CATEGORY = 'graph'; +//# sourceMappingURL=graph_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/image_executor.js +/** + * @license + * Copyright 2018 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + +const image_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'ResizeBilinear': { + const images = Object(utils["b" /* getParamValue */])('images', node, tensorMap, context); + const size = Object(utils["b" /* getParamValue */])('size', node, tensorMap, context); + const alignCorners = Object(utils["b" /* getParamValue */])('alignCorners', node, tensorMap, context); + return [dist["image"].resizeBilinear(images, [size[0], size[1]], alignCorners)]; + } + case 'ResizeNearestNeighbor': { + const images = Object(utils["b" /* getParamValue */])('images', node, tensorMap, context); + const size = Object(utils["b" /* getParamValue */])('size', node, tensorMap, context); + const alignCorners = Object(utils["b" /* getParamValue */])('alignCorners', node, tensorMap, context); + return [dist["image"].resizeNearestNeighbor(images, [size[0], size[1]], alignCorners)]; + } + case 'CropAndResize': { + const image = Object(utils["b" /* getParamValue */])('image', node, tensorMap, context); + const boxes = Object(utils["b" /* getParamValue */])('boxes', node, tensorMap, context); + const boxInd = Object(utils["b" /* getParamValue */])('boxInd', node, tensorMap, context); + const cropSize = Object(utils["b" /* getParamValue */])('cropSize', node, tensorMap, context); + const method = Object(utils["b" /* getParamValue */])('method', node, tensorMap, context); + const extrapolationValue = Object(utils["b" /* getParamValue */])('extrapolationValue', node, tensorMap, context); + return [dist["image"].cropAndResize(image, boxes, boxInd, cropSize, method, extrapolationValue)]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const image_executor_CATEGORY = 'image'; +//# sourceMappingURL=image_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/logical_executor.js +/** + * 
@license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const logical_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'Equal': { + return [dist["equal"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'NotEqual': { + return [dist["notEqual"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'Greater': { + return [dist["greater"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'GreaterEqual': { + return [dist["greaterEqual"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'Less': { + return [dist["less"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'LessEqual': { + return [dist["lessEqual"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'LogicalAnd': { + 
return [dist["logicalAnd"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'LogicalNot': { + return [dist["logicalNot"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context))]; + } + case 'LogicalOr': { + return [dist["logicalOr"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + case 'Select': + case 'SelectV2': { + return [dist["where"](Object(utils["b" /* getParamValue */])('condition', node, tensorMap, context), Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context))]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const logical_executor_CATEGORY = 'logical'; +//# sourceMappingURL=logical_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/matrices_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + +const matrices_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'BatchMatMul': + case 'BatchMatMulV2': + case 'MatMul': + return [dist["matMul"](Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), Object(utils["b" /* getParamValue */])('b', node, tensorMap, context), Object(utils["b" /* getParamValue */])('transposeA', node, tensorMap, context), Object(utils["b" /* getParamValue */])('transposeB', node, tensorMap, context))]; + case 'Transpose': + return [dist["transpose"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('perm', node, tensorMap, context))]; + case '_FusedMatMul': + const [extraOp, activationFunc] = Object(utils["b" /* getParamValue */])('fusedOps', node, tensorMap, context); + const isBiasAdd = extraOp === 'biasadd'; + const isPrelu = activationFunc === 'prelu'; + const numArgs = Object(utils["b" /* getParamValue */])('numArgs', node, tensorMap, context); + if (isBiasAdd) { + if (isPrelu && numArgs !== 2) { + throw new Error('Fused MatMul with BiasAdd and Prelu must have two ' + + 'extra arguments: bias and alpha.'); + } + if (!isPrelu && numArgs !== 1) { + throw new Error('Fused MatMul with BiasAdd must have one extra argument: bias.'); + } + } + const [biasArg, preluArg] = Object(utils["b" /* getParamValue */])('args', node, tensorMap, context); + return [dist["fused"].matMul({ + a: Object(utils["b" /* getParamValue */])('a', node, tensorMap, context), + b: Object(utils["b" /* getParamValue */])('b', node, tensorMap, context), + transposeA: Object(utils["b" /* getParamValue */])('transposeA', node, tensorMap, context), + transposeB: Object(utils["b" /* getParamValue */])('transposeB', node, tensorMap, context), + bias: biasArg, + activation: activationFunc, + preluActivationWeights: preluArg + })]; + default: + throw TypeError(`Node type 
${node.op} is not implemented`); + } +}; +const matrices_executor_CATEGORY = 'matrices'; +//# sourceMappingURL=matrices_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/normalization_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const normalization_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'FusedBatchNorm': + case 'FusedBatchNormV2': { + return [dist["batchNorm"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('mean', node, tensorMap, context), Object(utils["b" /* getParamValue */])('variance', node, tensorMap, context), Object(utils["b" /* getParamValue */])('offset', node, tensorMap, context), Object(utils["b" /* getParamValue */])('scale', node, tensorMap, context), Object(utils["b" /* getParamValue */])('epsilon', node, tensorMap, context))]; + } + case 'FusedBatchNormV3': { + return [dist["batchNorm"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('mean', node, tensorMap, context), Object(utils["b" /* getParamValue */])('variance', node, tensorMap, context), Object(utils["b" /* getParamValue */])('offset', node, tensorMap, context), Object(utils["b" /* 
getParamValue */])('scale', node, tensorMap, context), Object(utils["b" /* getParamValue */])('epsilon', node, tensorMap, context))]; + } + case 'LRN': { + return [dist["localResponseNormalization"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('radius', node, tensorMap, context), Object(utils["b" /* getParamValue */])('bias', node, tensorMap, context), Object(utils["b" /* getParamValue */])('alpha', node, tensorMap, context), Object(utils["b" /* getParamValue */])('beta', node, tensorMap, context))]; + } + case 'Softmax': { + return [dist["softmax"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'LogSoftmax': { + return [dist["logSoftmax"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'SparseToDense': { + return [dist["sparseToDense"](Object(utils["b" /* getParamValue */])('sparseIndices', node, tensorMap, context), Object(utils["b" /* getParamValue */])('outputShape', node, tensorMap, context), Object(utils["b" /* getParamValue */])('sparseValues', node, tensorMap, context), Object(utils["b" /* getParamValue */])('defaultValue', node, tensorMap, context))]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const normalization_executor_CATEGORY = 'normalization'; +//# sourceMappingURL=normalization_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/reduction_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const reduction_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'Max': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const keepDims = Object(utils["b" /* getParamValue */])('keepDims', node, tensorMap, context); + return [dist["max"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis, keepDims)]; + } + case 'Mean': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const keepDims = Object(utils["b" /* getParamValue */])('keepDims', node, tensorMap, context); + return [dist["mean"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis, keepDims)]; + } + case 'Min': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const keepDims = Object(utils["b" /* getParamValue */])('keepDims', node, tensorMap, context); + return [dist["min"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis, keepDims)]; + } + case 'Sum': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const keepDims = Object(utils["b" /* getParamValue */])('keepDims', node, tensorMap, context); + return [dist["sum"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis, keepDims)]; + } + case 'All': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const keepDims 
= Object(utils["b" /* getParamValue */])('keepDims', node, tensorMap, context); + return [dist["all"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis, keepDims)]; + } + case 'Any': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const keepDims = Object(utils["b" /* getParamValue */])('keepDims', node, tensorMap, context); + return [dist["any"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis, keepDims)]; + } + case 'ArgMax': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + return [dist["argMax"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis)]; + } + case 'ArgMin': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + return [dist["argMin"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis)]; + } + case 'Prod': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const keepDims = Object(utils["b" /* getParamValue */])('keepDims', node, tensorMap, context); + return [dist["prod"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis, keepDims)]; + } + case 'Cumsum': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const exclusive = Object(utils["b" /* getParamValue */])('exclusive', node, tensorMap, context); + const reverse = Object(utils["b" /* getParamValue */])('reverse', node, tensorMap, context); + return [dist["cumsum"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis, exclusive, reverse)]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const reduction_executor_CATEGORY = 'reduction'; +//# sourceMappingURL=reduction_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/slice_join_executor.js +/** + * 
@license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const slice_join_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'ConcatV2': + case 'Concat': { + const n = Object(utils["b" /* getParamValue */])('n', node, tensorMap, context); + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + let inputs = Object(utils["b" /* getParamValue */])('tensors', node, tensorMap, context); + inputs = inputs.slice(0, n); + return [dist["concat"](inputs, axis)]; + } + case 'GatherV2': + case 'Gather': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const input = Object(utils["b" /* getParamValue */])('x', node, tensorMap, context); + const indices = Object(utils["b" /* getParamValue */])('indices', node, tensorMap, context); + return [dist["gather"](input, indices.asType('int32'), axis)]; + } + case 'ReverseV2': + case 'Reverse': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const input = Object(utils["b" /* getParamValue */])('x', node, tensorMap, context); + return [dist["reverse"](input, axis)]; + } + case 'Slice': { + // tslint:disable-next-line:no-any + const begin = Object(utils["b" /* getParamValue */])('begin', node, tensorMap, context); + // 
tslint:disable-next-line:no-any + const size = Object(utils["b" /* getParamValue */])('size', node, tensorMap, context); + return [dist["slice"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), begin, size)]; + } + case 'StridedSlice': { + const begin = Object(utils["b" /* getParamValue */])('begin', node, tensorMap, context); + const end = Object(utils["b" /* getParamValue */])('end', node, tensorMap, context); + const strides = Object(utils["b" /* getParamValue */])('strides', node, tensorMap, context); + const beginMask = Object(utils["b" /* getParamValue */])('beginMask', node, tensorMap, context); + const endMask = Object(utils["b" /* getParamValue */])('endMask', node, tensorMap, context); + const ellipsisMask = Object(utils["b" /* getParamValue */])('ellipsisMask', node, tensorMap, context); + const newAxisMask = Object(utils["b" /* getParamValue */])('newAxisMask', node, tensorMap, context); + const shrinkAxisMask = Object(utils["b" /* getParamValue */])('shrinkAxisMask', node, tensorMap, context); + const tensor = Object(utils["b" /* getParamValue */])('x', node, tensorMap, context); + if (begin.length === 1 && tensor.shape.length > 1) { + for (let i = 1; i < tensor.shape.length; i++) { + begin.push(0); + end.push(tensor.shape[i]); + strides.push(strides[0]); + } + } + return [dist["stridedSlice"](tensor, begin, end, strides, beginMask, endMask, ellipsisMask, newAxisMask, shrinkAxisMask)]; + } + case 'Pack': { + return dist["tidy"](() => { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const tensors = Object(utils["b" /* getParamValue */])('tensors', node, tensorMap, context); + // Reshape the tensors to the first tensor's shape if they don't match. 
+ const shape = tensors[0].shape; + const squeezedShape = tensors[0].squeeze().shape; + const mapped = tensors.map(tensor => { + const sameShape = dist["util"].arraysEqual(tensor.shape, shape); + if (!sameShape && + !dist["util"].arraysEqual(tensor.squeeze().shape, squeezedShape)) { + throw new Error('the input tensors shape does not match'); + } + return sameShape ? tensor : tensor.reshape(shape); + }); + return [dist["stack"](mapped, axis)]; + }); + } + case 'Unpack': { + return dist["tidy"](() => { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const tensor = Object(utils["b" /* getParamValue */])('tensor', node, tensorMap, context); + return dist["unstack"](tensor, axis); + }); + } + case 'Tile': { + const reps = Object(utils["b" /* getParamValue */])('reps', node, tensorMap, context); + return [dist["tile"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), reps)]; + } + case 'Split': + case 'SplitV': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + const numOrSizeSplits = Object(utils["b" /* getParamValue */])('numOrSizeSplits', node, tensorMap, context); + return dist["split"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), numOrSizeSplits, axis); + } + case 'ScatterNd': { + const indices = Object(utils["b" /* getParamValue */])('indices', node, tensorMap, context); + const values = Object(utils["b" /* getParamValue */])('values', node, tensorMap, context); + const shape = Object(utils["b" /* getParamValue */])('shape', node, tensorMap, context); + return [dist["scatterND"](indices, values, shape)]; + } + case 'GatherNd': { + const x = Object(utils["b" /* getParamValue */])('x', node, tensorMap, context); + const indices = Object(utils["b" /* getParamValue */])('indices', node, tensorMap, context); + return [dist["gatherND"](x, indices)]; + } + case 'SparseToDense': { + const indices = Object(utils["b" /* getParamValue 
*/])('sparseIndices', node, tensorMap, context); + const shape = Object(utils["b" /* getParamValue */])('outputShape', node, tensorMap, context); + const sparseValues = Object(utils["b" /* getParamValue */])('sparseValues', node, tensorMap, context); + const defaultValue = Object(utils["b" /* getParamValue */])('defaultValue', node, tensorMap, context); + return [dist["sparseToDense"](indices, sparseValues, shape, sparseValues.dtype === defaultValue.dtype ? + defaultValue : + defaultValue.asType(sparseValues.dtype))]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const slice_join_executor_CATEGORY = 'slice_join'; +//# sourceMappingURL=slice_join_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/spectral_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + +const spectral_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'FFT': { + return [dist["fft"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'IFFT': { + return [dist["ifft"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'RFFT': { + return [dist["rfft"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + case 'IRFFT': { + return [dist["irfft"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context))]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const spectral_executor_CATEGORY = 'spectral'; +//# sourceMappingURL=spectral_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/executors/transformation_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ + + +const transformation_executor_executeOp = (node, tensorMap, context) => { + switch (node.op) { + case 'Cast': { + return [dist["cast"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('dtype', node, tensorMap, context))]; + } + case 'ExpandDims': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + return [dist["expandDims"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis)]; + } + case 'Squeeze': { + const axis = Object(utils["b" /* getParamValue */])('axis', node, tensorMap, context); + return [dist["squeeze"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), axis)]; + } + case 'Reshape': { + return [dist["reshape"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('shape', node, tensorMap, context))]; + } + case 'PadV2': + case 'Pad': { + return [dist["pad"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["f" /* split */])(Object(utils["b" /* getParamValue */])('padding', node, tensorMap, context), 2), Object(utils["b" /* getParamValue */])('constantValue', node, tensorMap, context))]; + } + case 'SpaceToBatchND': { + const blockShape = Object(utils["b" /* getParamValue */])('blockShape', node, tensorMap, context); + const paddings = Object(utils["f" /* split */])(Object(utils["b" /* getParamValue */])('paddings', node, tensorMap, context), 2); + return [dist["spaceToBatchND"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), blockShape, paddings)]; + } + case 'BatchToSpaceND': { + const blockShape = Object(utils["b" /* getParamValue */])('blockShape', node, tensorMap, context); + const crops = Object(utils["f" /* split */])(Object(utils["b" /* getParamValue */])('crops', node, tensorMap, context), 2); + return 
[dist["batchToSpaceND"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), blockShape, crops)]; + } + case 'DepthToSpace': { + const blockSize = Object(utils["b" /* getParamValue */])('blockSize', node, tensorMap, context); + const dataFormat = Object(utils["b" /* getParamValue */])('dataFormat', node, tensorMap, context).toUpperCase(); + return [dist["depthToSpace"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), blockSize, dataFormat)]; + } + case 'BroadcastTo': { + return [dist["broadcastTo"](Object(utils["b" /* getParamValue */])('x', node, tensorMap, context), Object(utils["b" /* getParamValue */])('shape', node, tensorMap, context))]; + } + default: + throw TypeError(`Node type ${node.op} is not implemented`); + } +}; +const transformation_executor_CATEGORY = 'transformation'; +//# sourceMappingURL=transformation_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/operations/operation_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + + + + + + + + + + + + + + + + + + +/** + * Executes the op defined by the node object. 
+ * @param node + * @param tensorMap contains tensors for executed nodes and weights + */ +function operation_executor_executeOp(node, tensorMap, context) { + const value = ((node, tensorMap, context) => { + switch (node.category) { + case 'arithmetic': + return dist["tidy"](() => executeOp(node, tensorMap, context)); + case 'basic_math': + return dist["tidy"](() => basic_math_executor_executeOp(node, tensorMap, context)); + case 'control': + return control_executor_executeOp(node, tensorMap, context); + case 'convolution': + return dist["tidy"](() => convolution_executor_executeOp(node, tensorMap, context)); + case 'creation': + return dist["tidy"](() => creation_executor_executeOp(node, tensorMap, context)); + case 'dynamic': + return dynamic_executor_executeOp(node, tensorMap, context); + case 'evaluation': + return dist["tidy"](() => evaluation_executor_executeOp(node, tensorMap, context)); + case 'image': + return dist["tidy"](() => image_executor_executeOp(node, tensorMap, context)); + case 'graph': + return dist["tidy"](() => graph_executor_executeOp(node, tensorMap, context)); + case 'logical': + return dist["tidy"](() => logical_executor_executeOp(node, tensorMap, context)); + case 'matrices': + return dist["tidy"](() => matrices_executor_executeOp(node, tensorMap, context)); + case 'normalization': + return dist["tidy"](() => normalization_executor_executeOp(node, tensorMap, context)); + case 'reduction': + return dist["tidy"](() => reduction_executor_executeOp(node, tensorMap, context)); + case 'slice_join': + return dist["tidy"](() => slice_join_executor_executeOp(node, tensorMap, context)); + case 'spectral': + return dist["tidy"](() => spectral_executor_executeOp(node, tensorMap, context)); + case 'transformation': + return dist["tidy"](() => transformation_executor_executeOp(node, tensorMap, context)); + case 'custom': + const opMapper = Object(register["b" /* getRegisteredOp */])(node.op); + if (opMapper && opMapper.customExecutor) { + return 
opMapper.customExecutor(new node_value_impl_NodeValueImpl(node, tensorMap, context)); + } + else { + throw TypeError(`Custom op ${node.op} is not registered.`); + } + default: + throw TypeError(`Unknown op '${node.op}'. File an issue at ` + + `https://github.com/tensorflow/tfjs/issues so we can add it` + + `, or register a custom execution with tf.registerOp()`); + } + })(node, tensorMap, context); + if (value instanceof Promise) { + return value.then((data) => [].concat(data)); + } + return [].concat(value); +} +//# sourceMappingURL=operation_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/executor/execution_context.js +/** + * ExecutionContext captures the runtime environment of the node. It keeps + * track of the current frame and iteration for the control flow ops. + * + * For example, typical Dynamic RNN model may contain loops, for which + * TensorFlow will generate graphs with Enter/Exit nodes to control the + * current execution frame, and NextIteration Nodes for iteration id increment. + * For model with branch logic, TensorFLow will generate Switch/Merge ops. + */ +class ExecutionContext { + constructor(weightMap, tensorArrayMap, functionMap = {}) { + this.weightMap = weightMap; + this.tensorArrayMap = tensorArrayMap; + this.functionMap = functionMap; + this.rootContext = { id: 0, frameName: '', iterationId: 0 }; + this.contexts = [this.rootContext]; + this.lastId = 0; + this.generateCurrentContextIds(); + } + newFrame(id, frameName) { + return { id, frameName, iterationId: 0 }; + } + /** + * Set the current context + * @param contexts: ExecutionContextInfo[] the current path of execution + * frames + */ + set currentContext(contexts) { + if (this.contexts !== contexts) { + this.contexts = contexts; + this.generateCurrentContextIds(); + } + } + get currentContext() { + return this.contexts; + } + /** + * Returns the current context in string format. 
+ */ + get currentContextId() { + return this._currentContextIds[0]; + } + /** + * Returns the current context and all parent contexts in string format. + * This allow access to the nodes in the current and parent frames. + */ + get currentContextIds() { + return this._currentContextIds; + } + generateCurrentContextIds() { + const names = []; + for (let i = 0; i < this.contexts.length - 1; i++) { + const contexts = this.contexts.slice(0, this.contexts.length - i); + names.push(this.contextIdforContexts(contexts)); + } + names.push(''); + this._currentContextIds = names; + } + contextIdforContexts(contexts) { + return contexts ? + contexts + .map(context => (context.id === 0 && context.iterationId === 0) ? + '' : + `${context.frameName}-${context.iterationId}`) + .join('/') : + ''; + } + /** + * Enter a new frame, a new context is pushed on the current context list. + * @param frameId new frame id + */ + enterFrame(frameId) { + if (this.contexts) { + this.lastId++; + this.contexts = this.contexts.slice(); + this.contexts.push(this.newFrame(this.lastId, frameId)); + this._currentContextIds.unshift(this.contextIdforContexts(this.contexts)); + } + } + /** + * Exit the current frame, the last context is removed from the current + * context list. + */ + exitFrame() { + if (this.contexts && this.contexts.length > 1) { + this.contexts = this.contexts.slice(); + this.contexts.splice(-1); + this.currentContextIds.shift(); + } + else { + throw new Error('Cannot exit frame, the context is empty'); + } + } + /** + * Enter the next iteration of a loop, the iteration id of last context is + * increased. 
+ */ + nextIteration() { + if (this.contexts && this.contexts.length > 0) { + this.contexts = this.contexts.slice(); + this.lastId++; + const context = Object.assign({}, this.contexts[this.contexts.length - 1]); + context.iterationId += 1; + context.id = this.lastId; + this.contexts.splice(-1, 1, context); + this._currentContextIds.splice(0, 1, this.contextIdforContexts(this.contexts)); + } + else { + throw new Error('Cannot increase frame iteration, the context is empty'); + } + } + getWeight(name) { + return this.weightMap[name]; + } + addTensorArray(tensorArray) { + this.tensorArrayMap[tensorArray.id] = tensorArray; + } + getTensorArray(id) { + return this.tensorArrayMap[id]; + } +} +//# sourceMappingURL=execution_context.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/executor/model_analysis.js +/** + * @license + * Copyright 2019 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + +/** + * Given graph inputs and desired outputs, find the minimal set of nodes + * to execute in order to compute the outputs. In addition return other useful + * info such: + * - Missing inputs needed to compute the output. + * - Whether the subgraph contains dynamic ops (control flow, dynamic shape). + * - Alternative inputs in order to avoid async (dynamic op) execution. 
+ */ +function getExecutionSubgraph(inputs, outputs, weightMap) { + const usedNodes = new Set(); + const missingInputs = []; + let dynamicNode = null; + let syncInputs = null; + // Start with the outputs, going backwards and find all the nodes that are + // needed to compute those outputs. + const seen = new Set(); + const inputNodeNames = Object.keys(inputs).map(name => Object(utils["e" /* parseNodeName */])(name)[0]); + const frontier = [...outputs]; + while (frontier.length > 0) { + const node = frontier.pop(); + if (isControlFlow(node) || isDynamicShape(node)) { + if (dynamicNode == null) { + dynamicNode = node; + syncInputs = dynamicNode.children.map(child => child.name) + .filter(name => usedNodes.has(name)); + } + } + usedNodes.add(node.name); + // Weights are dead end since we already have their values. + if (weightMap[node.name] != null) { + continue; + } + // This node is a dead end since it's one of the user-provided inputs. + if (inputNodeNames.indexOf(node.name) !== -1) { + continue; + } + if (node.inputs.length === 0) { + missingInputs.push(node.name); + continue; + } + node.inputs.forEach(input => { + // Don't add to the frontier if it is already there. + if (seen.has(input.name)) { + return; + } + seen.add(input.name); + frontier.push(input); + }); + } + return { inputs, outputs, usedNodes, missingInputs, dynamicNode, syncInputs }; +} +/** + * Given the execution info, return a list of nodes in topological order that + * need to be executed to compute the output. 
+ */ +function getNodesInTopologicalOrder(graph, weightMap, executionInfo) { + const { usedNodes, inputs } = executionInfo; + const frontier = []; + const inputNodes = Object.keys(inputs) + .map(name => Object(utils["e" /* parseNodeName */])(name)[0]) + .map(name => graph.nodes[name]); + inputNodes.forEach(input => { + if (usedNodes.has(input.name)) { + frontier.push(input); + } + }); + graph.weights.forEach(weight => { + if (usedNodes.has(weight.name)) { + frontier.push(weight); + } + }); + const seen = new Set(); + const orderedNodes = []; + while (frontier.length > 0) { + const node = frontier.pop(); + seen.add(node.name); + if (!weightMap[node.name]) { + orderedNodes.push(node); + } + node.children.forEach(child => { + if (!seen.has(child.name) && usedNodes.has(child.name) && + child.inputs.every(input => seen.has(input.name))) { + frontier.push(child); + } + }); + } + return orderedNodes; +} +const CONTROL_FLOW_OPS = [ + 'Switch', 'Merge', 'Enter', 'Exit', 'NextIteration', 'StatelessIf', + 'StatelessWhile' +]; +const DYNAMIC_SHAPE_OPS = [ + 'NonMaxSuppressionV2', 'NonMaxSuppressionV3', 'NonMaxSuppressionV5', 'Where' +]; +function isControlFlow(node) { + return CONTROL_FLOW_OPS.indexOf(node.op) >= 0; +} +function isDynamicShape(node) { + return DYNAMIC_SHAPE_OPS.indexOf(node.op) >= 0; +} +//# sourceMappingURL=model_analysis.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/executor/graph_executor.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + + + + +class graph_executor_GraphExecutor { + /** + * + * @param graph Graph the model or function graph to be executed. + * @param parent When building function exector you need to set the parent + * executor. Since the weights and function executor maps are set at parant + * level, that function executor can access the function maps and weight maps + * through the parent. + */ + constructor(graph, parent) { + this.graph = graph; + this.parent = parent; + this.compiledMap = new Map(); + this._weightMap = {}; + this.SEPERATOR = ','; + this._functions = {}; + this._functionExecutorMap = {}; + this._outputs = graph.outputs; + this._inputs = graph.inputs; + this._signature = graph.signature; + this._functions = graph.functions; + // create sub-graph executors + if (graph.functions != null) { + Object.keys(graph.functions).forEach(name => { + this._functionExecutorMap[name] = + new graph_executor_GraphExecutor(graph.functions[name], this); + }); + } + } + get weightIds() { + return this.parent ? this.parent.weightIds : this._weightIds; + } + get functionExecutorMap() { + return this.parent ? this.parent.functionExecutorMap : + this._functionExecutorMap; + } + get weightMap() { + return this.parent ? this.parent.weightMap : this._weightMap; + } + set weightMap(weightMap) { + const weightIds = Object.keys(weightMap).map(key => weightMap[key].map(tensor => tensor.id)); + this._weightIds = [].concat(...weightIds); + this._weightMap = weightMap; + } + get inputs() { + return this._inputs.map(node => { + return { + name: node.name, + shape: node.attrParams['shape'] ? + node.attrParams['shape'].value : + undefined, + dtype: node.attrParams['dtype'] ? 
+ node.attrParams['dtype'].value : + undefined + }; + }); + } + get outputs() { + return this._outputs.map(node => { + return { + name: node.name, + shape: node.attrParams['shape'] ? + node.attrParams['shape'].value : + undefined, + dtype: node.attrParams['dtype'] ? + node.attrParams['dtype'].value : + undefined + }; + }); + } + get inputNodes() { + return this._inputs.map(node => node.signatureKey || node.name); + } + get outputNodes() { + return this._outputs.map((node) => { + const name = node.signatureKey || node.name; + return node.defaultOutput ? (`${name}:${node.defaultOutput}`) : name; + }); + } + get functions() { + return Object.keys(this._functions).reduce((map, key) => { + map[key] = this._functions[key].signature; + return map; + }, {}); + } + getCompilationKey(inputs, outputs) { + const sortedInputs = inputs.map(node => node.name).sort(); + const sortedOutputs = outputs.map(node => node.name).sort(); + return sortedInputs.join(this.SEPERATOR) + '--' + + sortedOutputs.join(this.SEPERATOR); + } + /** + * Compiles the inference graph and returns the minimal set of nodes that are + * required for execution, in the correct execution order. + */ + compile(inputs, outputs) { + const executionInfo = getExecutionSubgraph(inputs, outputs, this.weightMap); + const { missingInputs, dynamicNode, syncInputs } = executionInfo; + if (dynamicNode != null) { + throw new Error(`This execution contains the node '${dynamicNode.name}', which has ` + + `the dynamic op '${dynamicNode.op}'. Please use ` + + `model.executeAsync() instead. Alternatively, to avoid the ` + + `dynamic ops, specify the inputs [${syncInputs}]`); + } + if (missingInputs.length > 0) { + const outNames = outputs.map(n => n.name); + const inNames = Object.keys(inputs); + throw new Error(`Cannot compute the outputs [${outNames}] from the provided inputs ` + + `[${inNames}]. 
Missing the following inputs: [${missingInputs}]`); + } + return getNodesInTopologicalOrder(this.graph, this.weightMap, executionInfo); + } + /** + * Executes the inference for given input tensors. + * @param inputs Tensor map for the model inputs, keyed by the input node + * names. + * @param outputs output node name from the Tensorflow model, if no outputs + * are specified, the default outputs of the model would be used. You can + * inspect intermediate nodes of the model by adding them to the outputs + * array. + */ + execute(inputs, outputs) { + inputs = this.mapInputs(inputs); + const names = Object.keys(inputs).sort(); + this.checkInputs(inputs); + this.checkInputShapeAndType(inputs); + outputs = this.mapOutputs(outputs); + this.checkOutputs(outputs); + const inputNodes = names.map(name => this.graph.nodes[Object(utils["e" /* parseNodeName */])(name)[0]]); + const outputNodes = outputs.map(name => this.graph.nodes[Object(utils["e" /* parseNodeName */])(name)[0]]); + const compilationKey = this.getCompilationKey(inputNodes, outputNodes); + // Do nothing if the compiled graph cache contains the input. 
+ let orderedNodes = this.compiledMap.get(compilationKey);
+ if (orderedNodes == null) {
+ orderedNodes = this.compile(inputs, outputNodes);
+ this.compiledMap.set(compilationKey, orderedNodes);
+ }
+ const tensorArrayMap = {};
+ return Object(dist["tidy"])(() => {
+ const context = new ExecutionContext(this.weightMap, tensorArrayMap, this.functionExecutorMap);
+ const tensorsMap = Object.assign({}, this.weightMap);
+ Object.keys(inputs).forEach(name => {
+ const [nodeName, index] = Object(utils["e" /* parseNodeName */])(name);
+ const tensors = [];
+ tensors[index] = inputs[name];
+ tensorsMap[nodeName] = tensors;
+ });
+ const tensorsToKeep = this.getFrozenTensorIds(tensorsMap);
+ const intermediateTensorConsumerCount = {};
+ for (let i = 0; i < orderedNodes.length; i++) {
+ const node = orderedNodes[i];
+ if (!tensorsMap[node.name]) {
+ const tensors = operation_executor_executeOp(node, tensorsMap, context);
+ if (tensors instanceof Promise) {
+ throw new Error(`The execution of the op '${node.op}' returned a promise. ` +
+ `Please use model.executeAsync() instead.`);
+ }
+ tensorsMap[node.name] = tensors;
+ this.checkTensorForDisposal(node.name, node, tensorsMap, context, tensorsToKeep, outputs, intermediateTensorConsumerCount);
+ }
+ }
+ return outputs.map(name => Object(utils["c" /* getTensor */])(name, tensorsMap, context));
+ });
+ }
+ getFrozenTensorIds(tensorMap) {
+ const ids = [].concat.apply([], Object.keys(tensorMap)
+ .map(key => tensorMap[key])
+ .map(tensors => tensors.map(tensor => tensor.id)));
+ return new Set(ids);
+ }
+ checkTensorForDisposal(nodeName, node, tensorMap, context, tensorsToKeep, outputNames, intermediateTensorConsumerCount) {
+ // Skip output nodes and any control flow nodes, since their dependencies
+ // are tricky to track correctly. 
+ if (node.category === 'control' || outputNames.indexOf(nodeName) !== -1) {
+ return;
+ }
+ tensorMap[nodeName].forEach(tensor => {
+ if (tensor != null) {
+ intermediateTensorConsumerCount[tensor.id] =
+ (intermediateTensorConsumerCount[tensor.id] || 0) +
+ node.children.length;
+ }
+ });
+ node.inputs.forEach(input => {
+ // Skip any control flow nodes, since their dependencies are tricky to
+ // track correctly.
+ if (input.category !== 'control') {
+ const tensors = Object(utils["d" /* getTensorsForCurrentContenxt */])(input.name, tensorMap, context);
+ if (tensors != null) {
+ tensors.forEach(tensor => {
+ if (tensor && !tensorsToKeep.has(tensor.id)) {
+ const count = intermediateTensorConsumerCount[tensor.id];
+ if (count === 1) {
+ tensor.dispose();
+ delete intermediateTensorConsumerCount[tensor.id];
+ }
+ else if (count != null) {
+ // Only intermediate nodes have a count set; inputs and weights
+ // do not.
+ intermediateTensorConsumerCount[tensor.id]--;
+ }
+ }
+ });
+ }
+ }
+ });
+ }
+ /**
+ * Executes the inference for given input tensors in an async fashion.
+ * @param inputs Tensor map for the model inputs, keyed by the input node
+ * names.
+ * @param outputs output node names from the TensorFlow model; if no outputs
+ * are specified, the default outputs of the model will be used. You can
+ * inspect intermediate nodes of the model by adding them to the outputs
+ * array. 
+ * @param disableWarning disable the 'no dynamic ops' warning message;
+ * defaults to false
+ */
+ async executeAsync(inputs, outputs, disableWarning = false) {
+ inputs = this.mapInputs(inputs);
+ this.checkInputs(inputs);
+ this.checkInputShapeAndType(inputs);
+ outputs = this.mapOutputs(outputs);
+ this.checkOutputs(outputs);
+ const tensorArrayMap = {};
+ const context = new ExecutionContext(this.weightMap, tensorArrayMap, this.functionExecutorMap);
+ // A graph with control flow ops requires runtime evaluation of the
+ // execution order, while without control flow the execution order is
+ // pre-determined in the compile method.
+ const tensorMap = await this.executeWithControlFlow(inputs, context, outputs, disableWarning);
+ const results = outputs.map(name => Object(utils["c" /* getTensor */])(name, tensorMap, context));
+ // Dispose all the intermediate tensors.
+ const outputIds = new Set(results.map(t => t.id));
+ const inputIds = new Set(Object.keys(inputs).map(name => inputs[name].id));
+ Object.keys(tensorMap).forEach(key => {
+ const tensorArray = tensorMap[key];
+ tensorArray.forEach(tensor => {
+ if (tensor && !tensor.isDisposed && !outputIds.has(tensor.id) &&
+ !inputIds.has(tensor.id) &&
+ this.weightIds.indexOf(tensor.id) === -1) {
+ tensor.dispose();
+ }
+ });
+ });
+ return results;
+ }
+ async executeFunctionAsync(inputs) {
+ const mappedInputs = inputs.reduce((map, tensor, index) => {
+ map[this.inputs[index].name] = tensor;
+ return map;
+ }, {});
+ return this.executeAsync(mappedInputs, this.outputNodes, true);
+ }
+ /**
+ * When there are control flow nodes in the graph, the graph execution uses
+ * ExecutionContext to keep track of the frames and loop iterators.
+ * @param inputs placeholder tensors for the graph.
+ * @param context the execution context object for the current execution. 
+ * @param disableWarning disable no async op warning + */ + async executeWithControlFlow(inputs, context, outputNames, disableWarning) { + const names = Object.keys(inputs); + const inputNodes = names.map(name => this.graph.nodes[Object(utils["e" /* parseNodeName */])(name)[0]]); + const outputNodes = outputNames.map(name => this.graph.nodes[Object(utils["e" /* parseNodeName */])(name)[0]]); + const { usedNodes, missingInputs, dynamicNode, syncInputs } = getExecutionSubgraph(inputs, outputNodes, this.weightMap); + const stack = [...inputNodes, ...this.graph.weights].map(node => { + return { node, contexts: context.currentContext }; + }); + const tensorsMap = Object.assign({}, this.weightMap); + Object.keys(inputs).forEach(name => { + const [nodeName, index] = Object(utils["e" /* parseNodeName */])(name); + const tensors = []; + tensors[index] = inputs[name]; + tensorsMap[nodeName] = tensors; + }); + const intermediateTensorConsumerCount = {}; + const tensorsToKeep = this.getFrozenTensorIds(tensorsMap); + const added = {}; + while (stack.length > 0) { + const promises = this.processStack(inputNodes, stack, context, tensorsMap, added, tensorsToKeep, outputNames, intermediateTensorConsumerCount, usedNodes); + await Promise.all(promises); + } + if (dynamicNode == null && !disableWarning) { + console.warn(`This model execution did not contain any nodes with control flow ` + + `or dynamic output shapes. You can use model.execute() instead.`); + } + const missingOutputs = outputNodes + .filter(node => !isControlFlow(node) && + !Object(utils["c" /* getTensor */])(node.name, tensorsMap, context)) + .map(node => node.name); + if (missingOutputs.length > 0) { + let alternativeMsg = ''; + if (dynamicNode != null) { + alternativeMsg = + `Alternatively, to avoid the dynamic ops, use model.execute() ` + + `and specify the inputs [${syncInputs}]`; + } + throw new Error(`Cannot compute the outputs [${missingOutputs}] from the provided ` + + `inputs [${names}]. 
Consider providing the following inputs: ` + + `[${missingInputs}]. ${alternativeMsg}`); + } + return tensorsMap; + } + processStack(inputNodes, stack, context, tensorMap, added, tensorsToKeep, outputNames, intermediateTensorConsumerCount, usedNodes) { + const promises = []; + while (stack.length > 0) { + const item = stack.pop(); + context.currentContext = item.contexts; + let nodeName = ''; + // The tensor of the Enter op with isConstant set should be set + // in the parent scope, so it will be available as constant for the + // whole loop. + if (item.node.op === 'Enter' && + Object(utils["b" /* getParamValue */])('isConstant', item.node, tensorMap, context)) { + [nodeName] = Object(utils["a" /* getNodeNameAndIndex */])(item.node.name, context); + } + // only process nodes that are not provided as input nodes. + if (inputNodes.indexOf(item.node) === -1) { + const tensors = operation_executor_executeOp(item.node, tensorMap, context); + if (!nodeName) { + [nodeName] = Object(utils["a" /* getNodeNameAndIndex */])(item.node.name, context); + } + const currentContext = context.currentContext; + if (tensors instanceof Promise) { + promises.push(tensors.then(t => { + tensorMap[nodeName] = t; + context.currentContext = currentContext; + this.checkTensorForDisposal(nodeName, item.node, tensorMap, context, tensorsToKeep, outputNames, intermediateTensorConsumerCount); + this.processChildNodes(item.node, stack, context, tensorMap, added, usedNodes); + return t; + })); + } + else { + tensorMap[nodeName] = tensors; + this.checkTensorForDisposal(nodeName, item.node, tensorMap, context, tensorsToKeep, outputNames, intermediateTensorConsumerCount); + this.processChildNodes(item.node, stack, context, tensorMap, added, usedNodes); + } + } + else { + this.processChildNodes(item.node, stack, context, tensorMap, added, usedNodes); + } + } + return promises; + } + processChildNodes(node, stack, context, tensorMap, added, usedNodes) { + node.children.forEach((childNode) => { + const 
[nodeName,] = Object(utils["a" /* getNodeNameAndIndex */])(childNode.name, context);
+ if (added[nodeName] || !usedNodes.has(childNode.name)) {
+ return;
+ }
+ // Merge op can be pushed if any of its inputs has a value.
+ if (childNode.op === 'Merge') {
+ if (childNode.inputNames.some(name => {
+ return !!Object(utils["c" /* getTensor */])(name, tensorMap, context);
+ })) {
+ added[nodeName] = true;
+ stack.push({ contexts: context.currentContext, node: childNode });
+ }
+ }
+ else // Otherwise, all inputs must have a value.
+ if (childNode.inputNames.every(name => {
+ return !!Object(utils["c" /* getTensor */])(name, tensorMap, context);
+ })) {
+ added[nodeName] = true;
+ stack.push({ contexts: context.currentContext, node: childNode });
+ }
+ });
+ }
+ /**
+ * Releases the memory used by the weight tensors.
+ */
+ dispose() {
+ Object.keys(this.weightMap)
+ .forEach(key => this.weightMap[key].forEach(tensor => tensor.dispose()));
+ }
+ checkInputShapeAndType(inputs) {
+ Object.keys(inputs).forEach(name => {
+ const input = inputs[name];
+ const [nodeName,] = Object(utils["e" /* parseNodeName */])(name);
+ const node = this.graph.nodes[nodeName];
+ if (node.attrParams['shape'] && node.attrParams['shape'].value) {
+ const shape = node.attrParams['shape'].value;
+ const match = shape.length === input.shape.length &&
+ input.shape.every((dim, index) => shape[index] === -1 || shape[index] === dim);
+ dist["util"].assert(match, () => `The shape of dict['${node.name}'] provided in ` +
+ `model.execute(dict) must be [${shape}], but was ` +
+ `[${input.shape}]`);
+ }
+ if (node.attrParams['dtype'] && node.attrParams['dtype'].value) {
+ dist["util"].assert(input.dtype === node.attrParams['dtype'].value, () => `The dtype of dict['${node.name}'] provided in ` +
+ `model.execute(dict) must be ` +
+ `${node.attrParams['dtype'].value}, but was ${input.dtype}`);
+ }
+ });
+ }
+ mapInputs(inputs) {
+ const result = {};
+ for (const inputName in inputs) {
+ if (this._signature != 
null && this._signature.inputs != null && + this._signature.inputs[inputName] != null) { + const tensor = this._signature.inputs[inputName]; + result[tensor.name] = inputs[inputName]; + } + else { + result[inputName] = inputs[inputName]; + } + } + return result; + } + checkInputs(inputs) { + const notInGraph = Object.keys(inputs).filter(name => { + const [nodeName] = Object(utils["e" /* parseNodeName */])(name); + return this.graph.nodes[nodeName] == null; + }); + if (notInGraph.length > 0) { + throw new Error(`The dict provided in model.execute(dict) has ` + + `keys: [${notInGraph}] that are not part of graph`); + } + } + mapOutputs(outputs) { + return outputs.map(name => { + if (this._signature != null && this._signature.outputs != null && + this._signature.outputs[name] != null) { + const tensor = this._signature.outputs[name]; + return tensor.name; + } + return name; + }, {}); + } + checkOutputs(outputs) { + outputs.forEach(name => { + const [normalizedName] = Object(utils["e" /* parseNodeName */])(name); + if (!this.graph.nodes[normalizedName]) { + throw new Error(`The output '${name}' is not found in the graph`); + } + }); + } +} +//# sourceMappingURL=graph_executor.js.map +// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/executor/graph_model.js +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * =============================================================================
+ */
+
+
+
+const TFHUB_SEARCH_PARAM = '?tfjs-format=file';
+const DEFAULT_MODEL_NAME = 'model.json';
+/**
+ * A `tf.GraphModel` is a directed acyclic graph built from a
+ * SavedModel GraphDef and allows inference execution.
+ *
+ * A `tf.GraphModel` can only be created by loading from a model converted from
+ * a [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) using
+ * the command line converter tool and loaded via `tf.loadGraphModel`.
+ */
+/** @doc {heading: 'Models', subheading: 'Classes'} */
+class graph_model_GraphModel {
+ /**
+ * @param modelUrl url for the model, or an `io.IOHandler`.
+ * @param loadOptions options for loading the model, e.g. the request
+ * options used when fetching the model and weights, which allow sending
+ * credentials and custom headers.
+ */
+ constructor(modelUrl, loadOptions = {}) {
+ this.modelUrl = modelUrl;
+ this.loadOptions = loadOptions;
+ this.version = 'n/a';
+ if (loadOptions == null) {
+ this.loadOptions = {};
+ }
+ }
+ // Returns the version information for the tensorflow model GraphDef.
+ get modelVersion() {
+ return this.version;
+ }
+ get inputNodes() {
+ return this.executor.inputNodes;
+ }
+ get outputNodes() {
+ return this.executor.outputNodes;
+ }
+ get inputs() {
+ return this.executor.inputs;
+ }
+ get outputs() {
+ return this.executor.outputs;
+ }
+ get weights() {
+ return this.executor.weightMap;
+ }
+ findIOHandler() {
+ const path = this.modelUrl;
+ if (path.load != null) {
+ // Path is an IO Handler. 
+ this.handler = path;
+ }
+ else if (this.loadOptions.requestInit != null) {
+ this.handler = dist["io"].browserHTTPRequest(path, this.loadOptions);
+ }
+ else {
+ const handlers = dist["io"].getLoadHandlers(path, this.loadOptions);
+ if (handlers.length === 0) {
+ // For backward compatibility: if no load handler can be found,
+ // assume it is a relative http path.
+ handlers.push(dist["io"].browserHTTPRequest(path, this.loadOptions));
+ }
+ else if (handlers.length > 1) {
+ throw new Error(`Found more than one (${handlers.length}) load handlers for ` +
+ `URL '${[path]}'`);
+ }
+ this.handler = handlers[0];
+ }
+ }
+ /**
+ * Loads the model and weight files, constructs the in-memory weight map,
+ * and compiles the inference graph.
+ */
+ async load() {
+ this.findIOHandler();
+ if (this.handler.load == null) {
+ throw new Error('Cannot proceed with model loading because the IOHandler provided ' +
+ 'does not have the `load` method implemented.');
+ }
+ const artifacts = await this.handler.load();
+ return this.loadSync(artifacts);
+ }
+ /**
+ * Synchronously constructs the in-memory weight map and
+ * compiles the inference graph.
+ */
+ /** @doc {heading: 'Models', subheading: 'Classes', ignoreCI: true} */
+ loadSync(artifacts) {
+ this.artifacts = artifacts;
+ const graph = this.artifacts.modelTopology;
+ let signature = {};
+ if (this.artifacts.userDefinedMetadata != null) {
+ signature = // tslint:disable-next-line:no-any
+ this.artifacts.userDefinedMetadata.signature;
+ }
+ this.version = `${graph.versions.producer}.${graph.versions.minConsumer}`;
+ const weightMap = dist["io"].decodeWeights(this.artifacts.weightData, this.artifacts.weightSpecs);
+ this.executor = new graph_executor_GraphExecutor(operation_mapper["a" /* OperationMapper */].Instance.transformGraph(graph, signature));
+ this.executor.weightMap = this.convertTensorMapToTensorsMap(weightMap);
+ return true;
+ }
+ /**
+ * Save the configuration and/or weights of the GraphModel. 
+ * + * An `IOHandler` is an object that has a `save` method of the proper + * signature defined. The `save` method manages the storing or + * transmission of serialized data ("artifacts") that represent the + * model's topology and weights onto or via a specific medium, such as + * file downloads, local storage, IndexedDB in the web browser and HTTP + * requests to a server. TensorFlow.js provides `IOHandler` + * implementations for a number of frequently used saving mediums, such as + * `tf.io.browserDownloads` and `tf.io.browserLocalStorage`. See `tf.io` + * for more details. + * + * This method also allows you to refer to certain types of `IOHandler`s + * as URL-like string shortcuts, such as 'localstorage://' and + * 'indexeddb://'. + * + * Example 1: Save `model`'s topology and weights to browser [local + * storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage); + * then load it back. + * + * ```js + * const modelUrl = + * 'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json'; + * const model = await tf.loadGraphModel(modelUrl); + * const zeros = tf.zeros([1, 224, 224, 3]); + * model.predict(zeros).print(); + * + * const saveResults = await model.save('localstorage://my-model-1'); + * + * const loadedModel = await tf.loadGraphModel('localstorage://my-model-1'); + * console.log('Prediction from loaded model:'); + * model.predict(zeros).print(); + * ``` + * + * @param handlerOrURL An instance of `IOHandler` or a URL-like, + * scheme-based string shortcut for `IOHandler`. + * @param config Options for saving the model. + * @returns A `Promise` of `SaveResult`, which summarizes the result of + * the saving, such as byte sizes of the saved artifacts for the model's + * topology and weight values. 
+ */
+ /**
+ * @doc {heading: 'Models', subheading: 'Classes', ignoreCI: true}
+ */
+ async save(handlerOrURL, config) {
+ if (typeof handlerOrURL === 'string') {
+ const handlers = dist["io"].getSaveHandlers(handlerOrURL);
+ if (handlers.length === 0) {
+ throw new Error(`Cannot find any save handlers for URL '${handlerOrURL}'`);
+ }
+ else if (handlers.length > 1) {
+ throw new Error(`Found more than one (${handlers.length}) save handlers for ` +
+ `URL '${handlerOrURL}'`);
+ }
+ handlerOrURL = handlers[0];
+ }
+ if (handlerOrURL.save == null) {
+ throw new Error('GraphModel.save() cannot proceed because the IOHandler ' +
+ 'provided does not have the `save` attribute defined.');
+ }
+ return handlerOrURL.save(this.artifacts);
+ }
+ /**
+ * Execute the inference for the input tensors.
+ *
+ * @param input The input tensors. When there is a single input for the
+ * model, the inputs param should be a `tf.Tensor`. For models with multiple
+ * inputs, the inputs param should be either a `tf.Tensor`[] if the input
+ * order is fixed, or otherwise in NamedTensorMap format.
+ *
+ * For models with multiple inputs, we recommend using NamedTensorMap as the
+ * input type; if you use `tf.Tensor`[], the order of the array needs to
+ * follow the
+ * order of the inputNodes array. @see {@link GraphModel.inputNodes}
+ *
+ * You can also feed any intermediate nodes using the NamedTensorMap as the
+ * input type. For example, given the graph
+ * InputNode => Intermediate => OutputNode,
+ * you can execute the subgraph Intermediate => OutputNode by calling
+ * model.execute({'IntermediateNode': tf.tensor(...)});
+ *
+ * This is useful for models that use tf.dynamic_rnn, where the intermediate
+ * state needs to be fed manually.
+ *
+ * For batch inference execution, the tensors for each input need to be
+ * concatenated together. For example with mobilenet, the required input shape
+ * is [1, 224, 224, 3], which represents the [batch, height, width, channel]. 
+ * If we provide batched data of 100 images, the input tensor should be
+ * in the shape of [100, 224, 224, 3].
+ *
+ * @param config Prediction configuration for specifying the batch size and
+ * output node names. Currently the batch size option is ignored for graph
+ * model.
+ *
+ * @returns Inference result tensors. The output will be a single `tf.Tensor`
+ * if the model has a single output node, otherwise Tensor[] or
+ * NamedTensorMap[] will be returned for models with multiple outputs.
+ */
+ /** @doc {heading: 'Models', subheading: 'Classes'} */
+ predict(inputs, config) {
+ return this.execute(inputs, this.outputNodes);
+ }
+ normalizeInputs(inputs) {
+ if (!(inputs instanceof dist["Tensor"]) && !Array.isArray(inputs)) {
+ // The input is already a NamedTensorMap.
+ return inputs;
+ }
+ inputs = Array.isArray(inputs) ? inputs : [inputs];
+ if (inputs.length !== this.inputNodes.length) {
+ throw new Error('Input tensor count mismatch,' +
+ `the graph model has ${this.inputNodes.length} placeholders, ` +
+ `while there are ${inputs.length} input tensors.`);
+ }
+ return this.inputNodes.reduce((map, inputName, i) => {
+ map[inputName] = inputs[i];
+ return map;
+ }, {});
+ }
+ normalizeOutputs(outputs) {
+ outputs = outputs || this.outputNodes;
+ return !Array.isArray(outputs) ? [outputs] : outputs;
+ }
+ /**
+ * Executes inference for the model for given input tensors.
+ * @param inputs tensor, tensor array or tensor map of the inputs for the
+ * model, keyed by the input node names.
+ * @param outputs output node names from the TensorFlow model; if no
+ * outputs are specified, the default outputs of the model will be used.
+ * You can inspect intermediate nodes of the model by adding them to the
+ * outputs array.
+ *
+ * @returns A single tensor if provided with a single output or no outputs
+ * are provided and there is only one default output; otherwise, a
+ * tensor array. 
The order of the tensor array is the same as the outputs
+ * if provided; otherwise it follows the order of the model's outputNodes
+ * attribute.
+ */
+ /** @doc {heading: 'Models', subheading: 'Classes'} */
+ execute(inputs, outputs) {
+ inputs = this.normalizeInputs(inputs);
+ outputs = this.normalizeOutputs(outputs);
+ const result = this.executor.execute(inputs, outputs);
+ return result.length > 1 ? result : result[0];
+ }
+ /**
+ * Executes inference for the model for given input tensors in an async
+ * fashion; use this method when your model contains control flow ops.
+ * @param inputs tensor, tensor array or tensor map of the inputs for the
+ * model, keyed by the input node names.
+ * @param outputs output node names from the TensorFlow model; if no outputs
+ * are specified, the default outputs of the model will be used. You can
+ * inspect intermediate nodes of the model by adding them to the outputs
+ * array.
+ *
+ * @returns A Promise of a single tensor if provided with a single output or
+ * no outputs are provided and there is only one default output; otherwise,
+ * a tensor array is returned.
+ */
+ /** @doc {heading: 'Models', subheading: 'Classes'} */
+ async executeAsync(inputs, outputs) {
+ inputs = this.normalizeInputs(inputs);
+ outputs = this.normalizeOutputs(outputs);
+ const result = await this.executor.executeAsync(inputs, outputs);
+ return result.length > 1 ? result : result[0];
+ }
+ convertTensorMapToTensorsMap(map) {
+ return Object.keys(map).reduce((newMap, key) => {
+ newMap[key] = [map[key]];
+ return newMap;
+ }, {});
+ }
+ /**
+ * Releases the memory used by the weight tensors.
+ */
+ /** @doc {heading: 'Models', subheading: 'Classes'} */
+ dispose() {
+ this.executor.dispose();
+ }
+}
+/**
+ * Load a graph model given a URL to the model definition. 
+ *
+ * Example of loading MobileNetV2 from a URL and making a prediction with a
+ * zeros input:
+ *
+ * ```js
+ * const modelUrl =
+ * 'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json';
+ * const model = await tf.loadGraphModel(modelUrl);
+ * const zeros = tf.zeros([1, 224, 224, 3]);
+ * model.predict(zeros).print();
+ * ```
+ *
+ * Example of loading MobileNetV2 from a TF Hub URL and making a prediction with
+ * a zeros input:
+ *
+ * ```js
+ * const modelUrl =
+ * 'https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/classification/2';
+ * const model = await tf.loadGraphModel(modelUrl, {fromTFHub: true});
+ * const zeros = tf.zeros([1, 224, 224, 3]);
+ * model.predict(zeros).print();
+ * ```
+ * @param modelUrl The url or an `io.IOHandler` that loads the model.
+ * @param options Options for the HTTP request, which allow sending credentials
+ * and custom headers.
+ */
+/** @doc {heading: 'Models', subheading: 'Loading'} */
+async function loadGraphModel(modelUrl, options = {}) {
+ if (modelUrl == null) {
+ throw new Error('modelUrl in loadGraphModel() cannot be null. Please provide a url ' +
+ 'or an IOHandler that loads the model');
+ }
+ if (options == null) {
+ options = {};
+ }
+ if (options.fromTFHub) {
+ if (modelUrl.load == null) {
+ if (!modelUrl.endsWith('/')) {
+ modelUrl = modelUrl + '/';
+ }
+ modelUrl = `${modelUrl}${DEFAULT_MODEL_NAME}${TFHUB_SEARCH_PARAM}`;
+ }
+ }
+ const model = new graph_model_GraphModel(modelUrl, options);
+ await model.load();
+ return model;
+}
+//# sourceMappingURL=graph_model.js.map
+// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/version.js
+/** @license See the LICENSE file. */
+// This code is auto-generated, do not modify this file!
+const version = '2.0.1';
+
+//# sourceMappingURL=version.js.map
+// CONCATENATED MODULE: ./node_modules/@tensorflow/tfjs-converter/dist/index.js
+/**
+ * @license
+ * Copyright 2018 Google LLC. All Rights Reserved. 
+ * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + + +//# sourceMappingURL=index.js.map + +/***/ }), +/* 39 */ +/***/ (function(module, exports, __webpack_require__) { + +"use strict"; +/* WEBPACK VAR INJECTION */(function(global) {/*! + * The buffer module from node.js, for the browser. + * + * @author Feross Aboukhadijeh + * @license MIT + */ +/* eslint-disable no-proto */ + + + +var base64 = __webpack_require__(65) +var ieee754 = __webpack_require__(66) +var isArray = __webpack_require__(67) + +exports.Buffer = Buffer +exports.SlowBuffer = SlowBuffer +exports.INSPECT_MAX_BYTES = 50 + +/** + * If `Buffer.TYPED_ARRAY_SUPPORT`: + * === true Use Uint8Array implementation (fastest) + * === false Use Object implementation (most compatible, even IE6) + * + * Browsers that support typed arrays are IE 10+, Firefox 4+, Chrome 7+, Safari 5.1+, + * Opera 11.6+, iOS 4.2+. + * + * Due to various browser bugs, sometimes the Object implementation will be used even + * when the browser supports typed arrays. + * + * Note: + * + * - Firefox 4-29 lacks support for adding new properties to `Uint8Array` instances, + * See: https://bugzilla.mozilla.org/show_bug.cgi?id=695438. + * + * - Chrome 9-10 is missing the `TypedArray.prototype.subarray` function. + * + * - IE10 has a broken `TypedArray.prototype.subarray` function which returns arrays of + * incorrect length in some situations. 
+ + * We detect these buggy browsers and set `Buffer.TYPED_ARRAY_SUPPORT` to `false` so they + * get the Object implementation, which is slower but behaves correctly. + */ +Buffer.TYPED_ARRAY_SUPPORT = global.TYPED_ARRAY_SUPPORT !== undefined + ? global.TYPED_ARRAY_SUPPORT + : typedArraySupport() + +/* + * Export kMaxLength after typed array support is determined. + */ +exports.kMaxLength = kMaxLength() + +function typedArraySupport () { + try { + var arr = new Uint8Array(1) + arr.__proto__ = {__proto__: Uint8Array.prototype, foo: function () { return 42 }} + return arr.foo() === 42 && // typed array instances can be augmented + typeof arr.subarray === 'function' && // chrome 9-10 lack `subarray` + arr.subarray(1, 1).byteLength === 0 // ie10 has broken `subarray` + } catch (e) { + return false + } +} + +function kMaxLength () { + return Buffer.TYPED_ARRAY_SUPPORT + ? 0x7fffffff + : 0x3fffffff +} + +function createBuffer (that, length) { + if (kMaxLength() < length) { + throw new RangeError('Invalid typed array length') + } + if (Buffer.TYPED_ARRAY_SUPPORT) { + // Return an augmented `Uint8Array` instance, for best performance + that = new Uint8Array(length) + that.__proto__ = Buffer.prototype + } else { + // Fallback: Return an object instance of the Buffer class + if (that === null) { + that = new Buffer(length) + } + that.length = length + } + + return that +} + +/** + * The Buffer constructor returns instances of `Uint8Array` that have their + * prototype changed to `Buffer.prototype`. Furthermore, `Buffer` is a subclass of + * `Uint8Array`, so the returned instances will have all the node `Buffer` methods + * and the `Uint8Array` methods. Square bracket notation works as expected -- it + * returns a single octet. + * + * The `Uint8Array` prototype remains unmodified. 
+ */ + +function Buffer (arg, encodingOrOffset, length) { + if (!Buffer.TYPED_ARRAY_SUPPORT && !(this instanceof Buffer)) { + return new Buffer(arg, encodingOrOffset, length) + } + + // Common case. + if (typeof arg === 'number') { + if (typeof encodingOrOffset === 'string') { + throw new Error( + 'If encoding is specified then the first argument must be a string' + ) + } + return allocUnsafe(this, arg) + } + return from(this, arg, encodingOrOffset, length) +} + +Buffer.poolSize = 8192 // not used by this implementation + +// TODO: Legacy, not needed anymore. Remove in next major version. +Buffer._augment = function (arr) { + arr.__proto__ = Buffer.prototype + return arr +} + +function from (that, value, encodingOrOffset, length) { + if (typeof value === 'number') { + throw new TypeError('"value" argument must not be a number') + } + + if (typeof ArrayBuffer !== 'undefined' && value instanceof ArrayBuffer) { + return fromArrayBuffer(that, value, encodingOrOffset, length) + } + + if (typeof value === 'string') { + return fromString(that, value, encodingOrOffset) + } + + return fromObject(that, value) +} + +/** + * Functionally equivalent to Buffer(arg, encoding) but throws a TypeError + * if value is a number. + * Buffer.from(str[, encoding]) + * Buffer.from(array) + * Buffer.from(buffer) + * Buffer.from(arrayBuffer[, byteOffset[, length]]) + **/ +Buffer.from = function (value, encodingOrOffset, length) { + return from(null, value, encodingOrOffset, length) +} + +if (Buffer.TYPED_ARRAY_SUPPORT) { + Buffer.prototype.__proto__ = Uint8Array.prototype + Buffer.__proto__ = Uint8Array + if (typeof Symbol !== 'undefined' && Symbol.species && + Buffer[Symbol.species] === Buffer) { + // Fix subarray() in ES2016. 
See: https://github.com/feross/buffer/pull/97
+ Object.defineProperty(Buffer, Symbol.species, {
+ value: null,
+ configurable: true
+ })
+ }
+}
+
+function assertSize (size) {
+ if (typeof size !== 'number') {
+ throw new TypeError('"size" argument must be a number')
+ } else if (size < 0) {
+ throw new RangeError('"size" argument must not be negative')
+ }
+}
+
+function alloc (that, size, fill, encoding) {
+ assertSize(size)
+ if (size <= 0) {
+ return createBuffer(that, size)
+ }
+ if (fill !== undefined) {
+ // Only pay attention to encoding if it's a string. This
+ // prevents accidentally sending in a number that would
+ // be interpreted as a start offset.
+ return typeof encoding === 'string'
+ ? createBuffer(that, size).fill(fill, encoding)
+ : createBuffer(that, size).fill(fill)
+ }
+ return createBuffer(that, size)
+}
+
+/**
+ * Creates a new filled Buffer instance.
+ * alloc(size[, fill[, encoding]])
+ **/
+Buffer.alloc = function (size, fill, encoding) {
+ return alloc(null, size, fill, encoding)
+}
+
+function allocUnsafe (that, size) {
+ assertSize(size)
+ that = createBuffer(that, size < 0 ? 0 : checked(size) | 0)
+ if (!Buffer.TYPED_ARRAY_SUPPORT) {
+ for (var i = 0; i < size; ++i) {
+ that[i] = 0
+ }
+ }
+ return that
+}
+
+/**
+ * Equivalent to Buffer(num), by default creates a non-zero-filled Buffer instance.
+ * */
+Buffer.allocUnsafe = function (size) {
+ return allocUnsafe(null, size)
+}
+/**
+ * Equivalent to SlowBuffer(num), by default creates a non-zero-filled Buffer instance. 
+ */ +Buffer.allocUnsafeSlow = function (size) { + return allocUnsafe(null, size) +} + +function fromString (that, string, encoding) { + if (typeof encoding !== 'string' || encoding === '') { + encoding = 'utf8' + } + + if (!Buffer.isEncoding(encoding)) { + throw new TypeError('"encoding" must be a valid string encoding') + } + + var length = byteLength(string, encoding) | 0 + that = createBuffer(that, length) + + var actual = that.write(string, encoding) + + if (actual !== length) { + // Writing a hex string, for example, that contains invalid characters will + // cause everything after the first invalid character to be ignored. (e.g. + // 'abxxcd' will be treated as 'ab') + that = that.slice(0, actual) + } + + return that +} + +function fromArrayLike (that, array) { + var length = array.length < 0 ? 0 : checked(array.length) | 0 + that = createBuffer(that, length) + for (var i = 0; i < length; i += 1) { + that[i] = array[i] & 255 + } + return that +} + +function fromArrayBuffer (that, array, byteOffset, length) { + array.byteLength // this throws if `array` is not a valid ArrayBuffer + + if (byteOffset < 0 || array.byteLength < byteOffset) { + throw new RangeError('\'offset\' is out of bounds') + } + + if (array.byteLength < byteOffset + (length || 0)) { + throw new RangeError('\'length\' is out of bounds') + } + + if (byteOffset === undefined && length === undefined) { + array = new Uint8Array(array) + } else if (length === undefined) { + array = new Uint8Array(array, byteOffset) + } else { + array = new Uint8Array(array, byteOffset, length) + } + + if (Buffer.TYPED_ARRAY_SUPPORT) { + // Return an augmented `Uint8Array` instance, for best performance + that = array + that.__proto__ = Buffer.prototype + } else { + // Fallback: Return an object instance of the Buffer class + that = fromArrayLike(that, array) + } + return that +} + +function fromObject (that, obj) { + if (Buffer.isBuffer(obj)) { + var len = checked(obj.length) | 0 + that = createBuffer(that, len) + 
+ if (that.length === 0) { + return that + } + + obj.copy(that, 0, 0, len) + return that + } + + if (obj) { + if ((typeof ArrayBuffer !== 'undefined' && + obj.buffer instanceof ArrayBuffer) || 'length' in obj) { + if (typeof obj.length !== 'number' || isnan(obj.length)) { + return createBuffer(that, 0) + } + return fromArrayLike(that, obj) + } + + if (obj.type === 'Buffer' && isArray(obj.data)) { + return fromArrayLike(that, obj.data) + } + } + + throw new TypeError('First argument must be a string, Buffer, ArrayBuffer, Array, or array-like object.') +} + +function checked (length) { + // Note: cannot use `length < kMaxLength()` here because that fails when + // length is NaN (which is otherwise coerced to zero.) + if (length >= kMaxLength()) { + throw new RangeError('Attempt to allocate Buffer larger than maximum ' + + 'size: 0x' + kMaxLength().toString(16) + ' bytes') + } + return length | 0 +} + +function SlowBuffer (length) { + if (+length != length) { // eslint-disable-line eqeqeq + length = 0 + } + return Buffer.alloc(+length) +} + +Buffer.isBuffer = function isBuffer (b) { + return !!(b != null && b._isBuffer) +} + +Buffer.compare = function compare (a, b) { + if (!Buffer.isBuffer(a) || !Buffer.isBuffer(b)) { + throw new TypeError('Arguments must be Buffers') + } + + if (a === b) return 0 + + var x = a.length + var y = b.length + + for (var i = 0, len = Math.min(x, y); i < len; ++i) { + if (a[i] !== b[i]) { + x = a[i] + y = b[i] + break + } + } + + if (x < y) return -1 + if (y < x) return 1 + return 0 +} + +Buffer.isEncoding = function isEncoding (encoding) { + switch (String(encoding).toLowerCase()) { + case 'hex': + case 'utf8': + case 'utf-8': + case 'ascii': + case 'latin1': + case 'binary': + case 'base64': + case 'ucs2': + case 'ucs-2': + case 'utf16le': + case 'utf-16le': + return true + default: + return false + } +} + +Buffer.concat = function concat (list, length) { + if (!isArray(list)) { + throw new TypeError('"list" argument must be an Array of 
Buffers') + } + + if (list.length === 0) { + return Buffer.alloc(0) + } + + var i + if (length === undefined) { + length = 0 + for (i = 0; i < list.length; ++i) { + length += list[i].length + } + } + + var buffer = Buffer.allocUnsafe(length) + var pos = 0 + for (i = 0; i < list.length; ++i) { + var buf = list[i] + if (!Buffer.isBuffer(buf)) { + throw new TypeError('"list" argument must be an Array of Buffers') + } + buf.copy(buffer, pos) + pos += buf.length + } + return buffer +} + +function byteLength (string, encoding) { + if (Buffer.isBuffer(string)) { + return string.length + } + if (typeof ArrayBuffer !== 'undefined' && typeof ArrayBuffer.isView === 'function' && + (ArrayBuffer.isView(string) || string instanceof ArrayBuffer)) { + return string.byteLength + } + if (typeof string !== 'string') { + string = '' + string + } + + var len = string.length + if (len === 0) return 0 + + // Use a for loop to avoid recursion + var loweredCase = false + for (;;) { + switch (encoding) { + case 'ascii': + case 'latin1': + case 'binary': + return len + case 'utf8': + case 'utf-8': + case undefined: + return utf8ToBytes(string).length + case 'ucs2': + case 'ucs-2': + case 'utf16le': + case 'utf-16le': + return len * 2 + case 'hex': + return len >>> 1 + case 'base64': + return base64ToBytes(string).length + default: + if (loweredCase) return utf8ToBytes(string).length // assume utf8 + encoding = ('' + encoding).toLowerCase() + loweredCase = true + } + } +} +Buffer.byteLength = byteLength + +function slowToString (encoding, start, end) { + var loweredCase = false + + // No need to verify that "this.length <= MAX_UINT32" since it's a read-only + // property of a typed array. + + // This behaves neither like String nor Uint8Array in that we set start/end + // to their upper/lower bounds if the value passed is out of range. + // undefined is handled specially as per ECMA-262 6th Edition, + // Section 13.3.3.7 Runtime Semantics: KeyedBindingInitialization. 
+ if (start === undefined || start < 0) { + start = 0 + } + // Return early if start > this.length. Done here to prevent potential uint32 + // coercion fail below. + if (start > this.length) { + return '' + } + + if (end === undefined || end > this.length) { + end = this.length + } + + if (end <= 0) { + return '' + } + + // Force coercion to uint32. This will also coerce falsey/NaN values to 0. + end >>>= 0 + start >>>= 0 + + if (end <= start) { + return '' + } + + if (!encoding) encoding = 'utf8' + + while (true) { + switch (encoding) { + case 'hex': + return hexSlice(this, start, end) + + case 'utf8': + case 'utf-8': + return utf8Slice(this, start, end) + + case 'ascii': + return asciiSlice(this, start, end) + + case 'latin1': + case 'binary': + return latin1Slice(this, start, end) + + case 'base64': + return base64Slice(this, start, end) + + case 'ucs2': + case 'ucs-2': + case 'utf16le': + case 'utf-16le': + return utf16leSlice(this, start, end) + + default: + if (loweredCase) throw new TypeError('Unknown encoding: ' + encoding) + encoding = (encoding + '').toLowerCase() + loweredCase = true + } + } +} + +// The property is used by `Buffer.isBuffer` and `is-buffer` (in Safari 5-7) to detect +// Buffer instances.
+Buffer.prototype._isBuffer = true + +function swap (b, n, m) { + var i = b[n] + b[n] = b[m] + b[m] = i +} + +Buffer.prototype.swap16 = function swap16 () { + var len = this.length + if (len % 2 !== 0) { + throw new RangeError('Buffer size must be a multiple of 16-bits') + } + for (var i = 0; i < len; i += 2) { + swap(this, i, i + 1) + } + return this +} + +Buffer.prototype.swap32 = function swap32 () { + var len = this.length + if (len % 4 !== 0) { + throw new RangeError('Buffer size must be a multiple of 32-bits') + } + for (var i = 0; i < len; i += 4) { + swap(this, i, i + 3) + swap(this, i + 1, i + 2) + } + return this +} + +Buffer.prototype.swap64 = function swap64 () { + var len = this.length + if (len % 8 !== 0) { + throw new RangeError('Buffer size must be a multiple of 64-bits') + } + for (var i = 0; i < len; i += 8) { + swap(this, i, i + 7) + swap(this, i + 1, i + 6) + swap(this, i + 2, i + 5) + swap(this, i + 3, i + 4) + } + return this +} + +Buffer.prototype.toString = function toString () { + var length = this.length | 0 + if (length === 0) return '' + if (arguments.length === 0) return utf8Slice(this, 0, length) + return slowToString.apply(this, arguments) +} + +Buffer.prototype.equals = function equals (b) { + if (!Buffer.isBuffer(b)) throw new TypeError('Argument must be a Buffer') + if (this === b) return true + return Buffer.compare(this, b) === 0 +} + +Buffer.prototype.inspect = function inspect () { + var str = '' + var max = exports.INSPECT_MAX_BYTES + if (this.length > 0) { + str = this.toString('hex', 0, max).match(/.{2}/g).join(' ') + if (this.length > max) str += ' ... ' + } + return '<Buffer ' + str + '>' +} + +Buffer.prototype.compare = function compare (target, start, end, thisStart, thisEnd) { + if (!Buffer.isBuffer(target)) { + throw new TypeError('Argument must be a Buffer') + } + + if (start === undefined) { + start = 0 + } + if (end === undefined) { + end = target ?
target.length : 0 + } + if (thisStart === undefined) { + thisStart = 0 + } + if (thisEnd === undefined) { + thisEnd = this.length + } + + if (start < 0 || end > target.length || thisStart < 0 || thisEnd > this.length) { + throw new RangeError('out of range index') + } + + if (thisStart >= thisEnd && start >= end) { + return 0 + } + if (thisStart >= thisEnd) { + return -1 + } + if (start >= end) { + return 1 + } + + start >>>= 0 + end >>>= 0 + thisStart >>>= 0 + thisEnd >>>= 0 + + if (this === target) return 0 + + var x = thisEnd - thisStart + var y = end - start + var len = Math.min(x, y) + + var thisCopy = this.slice(thisStart, thisEnd) + var targetCopy = target.slice(start, end) + + for (var i = 0; i < len; ++i) { + if (thisCopy[i] !== targetCopy[i]) { + x = thisCopy[i] + y = targetCopy[i] + break + } + } + + if (x < y) return -1 + if (y < x) return 1 + return 0 +} + +// Finds either the first index of `val` in `buffer` at offset >= `byteOffset`, +// OR the last index of `val` in `buffer` at offset <= `byteOffset`. +// +// Arguments: +// - buffer - a Buffer to search +// - val - a string, Buffer, or number +// - byteOffset - an index into `buffer`; will be clamped to an int32 +// - encoding - an optional encoding, relevant if val is a string +// - dir - true for indexOf, false for lastIndexOf +function bidirectionalIndexOf (buffer, val, byteOffset, encoding, dir) { + // Empty buffer means no match + if (buffer.length === 0) return -1 + + // Normalize byteOffset + if (typeof byteOffset === 'string') { + encoding = byteOffset + byteOffset = 0 + } else if (byteOffset > 0x7fffffff) { + byteOffset = 0x7fffffff + } else if (byteOffset < -0x80000000) { + byteOffset = -0x80000000 + } + byteOffset = +byteOffset // Coerce to Number. + if (isNaN(byteOffset)) { + // byteOffset: if it's undefined, null, NaN, "foo", etc, search whole buffer + byteOffset = dir ?
0 : (buffer.length - 1) + } + + // Normalize byteOffset: negative offsets start from the end of the buffer + if (byteOffset < 0) byteOffset = buffer.length + byteOffset + if (byteOffset >= buffer.length) { + if (dir) return -1 + else byteOffset = buffer.length - 1 + } else if (byteOffset < 0) { + if (dir) byteOffset = 0 + else return -1 + } + + // Normalize val + if (typeof val === 'string') { + val = Buffer.from(val, encoding) + } + + // Finally, search either indexOf (if dir is true) or lastIndexOf + if (Buffer.isBuffer(val)) { + // Special case: looking for empty string/buffer always fails + if (val.length === 0) { + return -1 + } + return arrayIndexOf(buffer, val, byteOffset, encoding, dir) + } else if (typeof val === 'number') { + val = val & 0xFF // Search for a byte value [0-255] + if (Buffer.TYPED_ARRAY_SUPPORT && + typeof Uint8Array.prototype.indexOf === 'function') { + if (dir) { + return Uint8Array.prototype.indexOf.call(buffer, val, byteOffset) + } else { + return Uint8Array.prototype.lastIndexOf.call(buffer, val, byteOffset) + } + } + return arrayIndexOf(buffer, [ val ], byteOffset, encoding, dir) + } + + throw new TypeError('val must be string, number or Buffer') +} + +function arrayIndexOf (arr, val, byteOffset, encoding, dir) { + var indexSize = 1 + var arrLength = arr.length + var valLength = val.length + + if (encoding !== undefined) { + encoding = String(encoding).toLowerCase() + if (encoding === 'ucs2' || encoding === 'ucs-2' || + encoding === 'utf16le' || encoding === 'utf-16le') { + if (arr.length < 2 || val.length < 2) { + return -1 + } + indexSize = 2 + arrLength /= 2 + valLength /= 2 + byteOffset /= 2 + } + } + + function read (buf, i) { + if (indexSize === 1) { + return buf[i] + } else { + return buf.readUInt16BE(i * indexSize) + } + } + + var i + if (dir) { + var foundIndex = -1 + for (i = byteOffset; i < arrLength; i++) { + if (read(arr, i) === read(val, foundIndex === -1 ? 
0 : i - foundIndex)) { + if (foundIndex === -1) foundIndex = i + if (i - foundIndex + 1 === valLength) return foundIndex * indexSize + } else { + if (foundIndex !== -1) i -= i - foundIndex + foundIndex = -1 + } + } + } else { + if (byteOffset + valLength > arrLength) byteOffset = arrLength - valLength + for (i = byteOffset; i >= 0; i--) { + var found = true + for (var j = 0; j < valLength; j++) { + if (read(arr, i + j) !== read(val, j)) { + found = false + break + } + } + if (found) return i + } + } + + return -1 +} + +Buffer.prototype.includes = function includes (val, byteOffset, encoding) { + return this.indexOf(val, byteOffset, encoding) !== -1 +} + +Buffer.prototype.indexOf = function indexOf (val, byteOffset, encoding) { + return bidirectionalIndexOf(this, val, byteOffset, encoding, true) +} + +Buffer.prototype.lastIndexOf = function lastIndexOf (val, byteOffset, encoding) { + return bidirectionalIndexOf(this, val, byteOffset, encoding, false) +} + +function hexWrite (buf, string, offset, length) { + offset = Number(offset) || 0 + var remaining = buf.length - offset + if (!length) { + length = remaining + } else { + length = Number(length) + if (length > remaining) { + length = remaining + } + } + + // must be an even number of digits + var strLen = string.length + if (strLen % 2 !== 0) throw new TypeError('Invalid hex string') + + if (length > strLen / 2) { + length = strLen / 2 + } + for (var i = 0; i < length; ++i) { + var parsed = parseInt(string.substr(i * 2, 2), 16) + if (isNaN(parsed)) return i + buf[offset + i] = parsed + } + return i +} + +function utf8Write (buf, string, offset, length) { + return blitBuffer(utf8ToBytes(string, buf.length - offset), buf, offset, length) +} + +function asciiWrite (buf, string, offset, length) { + return blitBuffer(asciiToBytes(string), buf, offset, length) +} + +function latin1Write (buf, string, offset, length) { + return asciiWrite(buf, string, offset, length) +} + +function base64Write (buf, string, offset, 
length) { + return blitBuffer(base64ToBytes(string), buf, offset, length) +} + +function ucs2Write (buf, string, offset, length) { + return blitBuffer(utf16leToBytes(string, buf.length - offset), buf, offset, length) +} + +Buffer.prototype.write = function write (string, offset, length, encoding) { + // Buffer#write(string) + if (offset === undefined) { + encoding = 'utf8' + length = this.length + offset = 0 + // Buffer#write(string, encoding) + } else if (length === undefined && typeof offset === 'string') { + encoding = offset + length = this.length + offset = 0 + // Buffer#write(string, offset[, length][, encoding]) + } else if (isFinite(offset)) { + offset = offset | 0 + if (isFinite(length)) { + length = length | 0 + if (encoding === undefined) encoding = 'utf8' + } else { + encoding = length + length = undefined + } + // legacy write(string, encoding, offset, length) - remove in v0.13 + } else { + throw new Error( + 'Buffer.write(string, encoding, offset[, length]) is no longer supported' + ) + } + + var remaining = this.length - offset + if (length === undefined || length > remaining) length = remaining + + if ((string.length > 0 && (length < 0 || offset < 0)) || offset > this.length) { + throw new RangeError('Attempt to write outside buffer bounds') + } + + if (!encoding) encoding = 'utf8' + + var loweredCase = false + for (;;) { + switch (encoding) { + case 'hex': + return hexWrite(this, string, offset, length) + + case 'utf8': + case 'utf-8': + return utf8Write(this, string, offset, length) + + case 'ascii': + return asciiWrite(this, string, offset, length) + + case 'latin1': + case 'binary': + return latin1Write(this, string, offset, length) + + case 'base64': + // Warning: maxLength not taken into account in base64Write + return base64Write(this, string, offset, length) + + case 'ucs2': + case 'ucs-2': + case 'utf16le': + case 'utf-16le': + return ucs2Write(this, string, offset, length) + + default: + if (loweredCase) throw new TypeError('Unknown 
encoding: ' + encoding) + encoding = ('' + encoding).toLowerCase() + loweredCase = true + } + } +} + +Buffer.prototype.toJSON = function toJSON () { + return { + type: 'Buffer', + data: Array.prototype.slice.call(this._arr || this, 0) + } +} + +function base64Slice (buf, start, end) { + if (start === 0 && end === buf.length) { + return base64.fromByteArray(buf) + } else { + return base64.fromByteArray(buf.slice(start, end)) + } +} + +function utf8Slice (buf, start, end) { + end = Math.min(buf.length, end) + var res = [] + + var i = start + while (i < end) { + var firstByte = buf[i] + var codePoint = null + var bytesPerSequence = (firstByte > 0xEF) ? 4 + : (firstByte > 0xDF) ? 3 + : (firstByte > 0xBF) ? 2 + : 1 + + if (i + bytesPerSequence <= end) { + var secondByte, thirdByte, fourthByte, tempCodePoint + + switch (bytesPerSequence) { + case 1: + if (firstByte < 0x80) { + codePoint = firstByte + } + break + case 2: + secondByte = buf[i + 1] + if ((secondByte & 0xC0) === 0x80) { + tempCodePoint = (firstByte & 0x1F) << 0x6 | (secondByte & 0x3F) + if (tempCodePoint > 0x7F) { + codePoint = tempCodePoint + } + } + break + case 3: + secondByte = buf[i + 1] + thirdByte = buf[i + 2] + if ((secondByte & 0xC0) === 0x80 && (thirdByte & 0xC0) === 0x80) { + tempCodePoint = (firstByte & 0xF) << 0xC | (secondByte & 0x3F) << 0x6 | (thirdByte & 0x3F) + if (tempCodePoint > 0x7FF && (tempCodePoint < 0xD800 || tempCodePoint > 0xDFFF)) { + codePoint = tempCodePoint + } + } + break + case 4: + secondByte = buf[i + 1] + thirdByte = buf[i + 2] + fourthByte = buf[i + 3] + if ((secondByte & 0xC0) === 0x80 && (thirdByte & 0xC0) === 0x80 && (fourthByte & 0xC0) === 0x80) { + tempCodePoint = (firstByte & 0xF) << 0x12 | (secondByte & 0x3F) << 0xC | (thirdByte & 0x3F) << 0x6 | (fourthByte & 0x3F) + if (tempCodePoint > 0xFFFF && tempCodePoint < 0x110000) { + codePoint = tempCodePoint + } + } + } + } + + if (codePoint === null) { + // we did not generate a valid codePoint so insert a + // 
replacement char (U+FFFD) and advance only 1 byte + codePoint = 0xFFFD + bytesPerSequence = 1 + } else if (codePoint > 0xFFFF) { + // encode to utf16 (surrogate pair dance) + codePoint -= 0x10000 + res.push(codePoint >>> 10 & 0x3FF | 0xD800) + codePoint = 0xDC00 | codePoint & 0x3FF + } + + res.push(codePoint) + i += bytesPerSequence + } + + return decodeCodePointsArray(res) +} + +// Based on http://stackoverflow.com/a/22747272/680742, the browser with +// the lowest limit is Chrome, with 0x10000 args. +// We go 1 magnitude less, for safety +var MAX_ARGUMENTS_LENGTH = 0x1000 + +function decodeCodePointsArray (codePoints) { + var len = codePoints.length + if (len <= MAX_ARGUMENTS_LENGTH) { + return String.fromCharCode.apply(String, codePoints) // avoid extra slice() + } + + // Decode in chunks to avoid "call stack size exceeded". + var res = '' + var i = 0 + while (i < len) { + res += String.fromCharCode.apply( + String, + codePoints.slice(i, i += MAX_ARGUMENTS_LENGTH) + ) + } + return res +} + +function asciiSlice (buf, start, end) { + var ret = '' + end = Math.min(buf.length, end) + + for (var i = start; i < end; ++i) { + ret += String.fromCharCode(buf[i] & 0x7F) + } + return ret +} + +function latin1Slice (buf, start, end) { + var ret = '' + end = Math.min(buf.length, end) + + for (var i = start; i < end; ++i) { + ret += String.fromCharCode(buf[i]) + } + return ret +} + +function hexSlice (buf, start, end) { + var len = buf.length + + if (!start || start < 0) start = 0 + if (!end || end < 0 || end > len) end = len + + var out = '' + for (var i = start; i < end; ++i) { + out += toHex(buf[i]) + } + return out +} + +function utf16leSlice (buf, start, end) { + var bytes = buf.slice(start, end) + var res = '' + for (var i = 0; i < bytes.length; i += 2) { + res += String.fromCharCode(bytes[i] + bytes[i + 1] * 256) + } + return res +} + +Buffer.prototype.slice = function slice (start, end) { + var len = this.length + start = ~~start + end = end === undefined ? 
len : ~~end + + if (start < 0) { + start += len + if (start < 0) start = 0 + } else if (start > len) { + start = len + } + + if (end < 0) { + end += len + if (end < 0) end = 0 + } else if (end > len) { + end = len + } + + if (end < start) end = start + + var newBuf + if (Buffer.TYPED_ARRAY_SUPPORT) { + newBuf = this.subarray(start, end) + newBuf.__proto__ = Buffer.prototype + } else { + var sliceLen = end - start + newBuf = new Buffer(sliceLen, undefined) + for (var i = 0; i < sliceLen; ++i) { + newBuf[i] = this[i + start] + } + } + + return newBuf +} + +/* + * Need to make sure that buffer isn't trying to write out of bounds. + */ +function checkOffset (offset, ext, length) { + if ((offset % 1) !== 0 || offset < 0) throw new RangeError('offset is not uint') + if (offset + ext > length) throw new RangeError('Trying to access beyond buffer length') +} + +Buffer.prototype.readUIntLE = function readUIntLE (offset, byteLength, noAssert) { + offset = offset | 0 + byteLength = byteLength | 0 + if (!noAssert) checkOffset(offset, byteLength, this.length) + + var val = this[offset] + var mul = 1 + var i = 0 + while (++i < byteLength && (mul *= 0x100)) { + val += this[offset + i] * mul + } + + return val +} + +Buffer.prototype.readUIntBE = function readUIntBE (offset, byteLength, noAssert) { + offset = offset | 0 + byteLength = byteLength | 0 + if (!noAssert) { + checkOffset(offset, byteLength, this.length) + } + + var val = this[offset + --byteLength] + var mul = 1 + while (byteLength > 0 && (mul *= 0x100)) { + val += this[offset + --byteLength] * mul + } + + return val +} + +Buffer.prototype.readUInt8 = function readUInt8 (offset, noAssert) { + if (!noAssert) checkOffset(offset, 1, this.length) + return this[offset] +} + +Buffer.prototype.readUInt16LE = function readUInt16LE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 2, this.length) + return this[offset] | (this[offset + 1] << 8) +} + +Buffer.prototype.readUInt16BE = function readUInt16BE (offset, noAssert) 
{ + if (!noAssert) checkOffset(offset, 2, this.length) + return (this[offset] << 8) | this[offset + 1] +} + +Buffer.prototype.readUInt32LE = function readUInt32LE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 4, this.length) + + return ((this[offset]) | + (this[offset + 1] << 8) | + (this[offset + 2] << 16)) + + (this[offset + 3] * 0x1000000) +} + +Buffer.prototype.readUInt32BE = function readUInt32BE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 4, this.length) + + return (this[offset] * 0x1000000) + + ((this[offset + 1] << 16) | + (this[offset + 2] << 8) | + this[offset + 3]) +} + +Buffer.prototype.readIntLE = function readIntLE (offset, byteLength, noAssert) { + offset = offset | 0 + byteLength = byteLength | 0 + if (!noAssert) checkOffset(offset, byteLength, this.length) + + var val = this[offset] + var mul = 1 + var i = 0 + while (++i < byteLength && (mul *= 0x100)) { + val += this[offset + i] * mul + } + mul *= 0x80 + + if (val >= mul) val -= Math.pow(2, 8 * byteLength) + + return val +} + +Buffer.prototype.readIntBE = function readIntBE (offset, byteLength, noAssert) { + offset = offset | 0 + byteLength = byteLength | 0 + if (!noAssert) checkOffset(offset, byteLength, this.length) + + var i = byteLength + var mul = 1 + var val = this[offset + --i] + while (i > 0 && (mul *= 0x100)) { + val += this[offset + --i] * mul + } + mul *= 0x80 + + if (val >= mul) val -= Math.pow(2, 8 * byteLength) + + return val +} + +Buffer.prototype.readInt8 = function readInt8 (offset, noAssert) { + if (!noAssert) checkOffset(offset, 1, this.length) + if (!(this[offset] & 0x80)) return (this[offset]) + return ((0xff - this[offset] + 1) * -1) +} + +Buffer.prototype.readInt16LE = function readInt16LE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 2, this.length) + var val = this[offset] | (this[offset + 1] << 8) + return (val & 0x8000) ? 
val | 0xFFFF0000 : val +} + +Buffer.prototype.readInt16BE = function readInt16BE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 2, this.length) + var val = this[offset + 1] | (this[offset] << 8) + return (val & 0x8000) ? val | 0xFFFF0000 : val +} + +Buffer.prototype.readInt32LE = function readInt32LE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 4, this.length) + + return (this[offset]) | + (this[offset + 1] << 8) | + (this[offset + 2] << 16) | + (this[offset + 3] << 24) +} + +Buffer.prototype.readInt32BE = function readInt32BE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 4, this.length) + + return (this[offset] << 24) | + (this[offset + 1] << 16) | + (this[offset + 2] << 8) | + (this[offset + 3]) +} + +Buffer.prototype.readFloatLE = function readFloatLE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 4, this.length) + return ieee754.read(this, offset, true, 23, 4) +} + +Buffer.prototype.readFloatBE = function readFloatBE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 4, this.length) + return ieee754.read(this, offset, false, 23, 4) +} + +Buffer.prototype.readDoubleLE = function readDoubleLE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 8, this.length) + return ieee754.read(this, offset, true, 52, 8) +} + +Buffer.prototype.readDoubleBE = function readDoubleBE (offset, noAssert) { + if (!noAssert) checkOffset(offset, 8, this.length) + return ieee754.read(this, offset, false, 52, 8) +} + +function checkInt (buf, value, offset, ext, max, min) { + if (!Buffer.isBuffer(buf)) throw new TypeError('"buffer" argument must be a Buffer instance') + if (value > max || value < min) throw new RangeError('"value" argument is out of bounds') + if (offset + ext > buf.length) throw new RangeError('Index out of range') +} + +Buffer.prototype.writeUIntLE = function writeUIntLE (value, offset, byteLength, noAssert) { + value = +value + offset = offset | 0 + byteLength = byteLength | 0 + if (!noAssert) { + var maxBytes 
= Math.pow(2, 8 * byteLength) - 1 + checkInt(this, value, offset, byteLength, maxBytes, 0) + } + + var mul = 1 + var i = 0 + this[offset] = value & 0xFF + while (++i < byteLength && (mul *= 0x100)) { + this[offset + i] = (value / mul) & 0xFF + } + + return offset + byteLength +} + +Buffer.prototype.writeUIntBE = function writeUIntBE (value, offset, byteLength, noAssert) { + value = +value + offset = offset | 0 + byteLength = byteLength | 0 + if (!noAssert) { + var maxBytes = Math.pow(2, 8 * byteLength) - 1 + checkInt(this, value, offset, byteLength, maxBytes, 0) + } + + var i = byteLength - 1 + var mul = 1 + this[offset + i] = value & 0xFF + while (--i >= 0 && (mul *= 0x100)) { + this[offset + i] = (value / mul) & 0xFF + } + + return offset + byteLength +} + +Buffer.prototype.writeUInt8 = function writeUInt8 (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 1, 0xff, 0) + if (!Buffer.TYPED_ARRAY_SUPPORT) value = Math.floor(value) + this[offset] = (value & 0xff) + return offset + 1 +} + +function objectWriteUInt16 (buf, value, offset, littleEndian) { + if (value < 0) value = 0xffff + value + 1 + for (var i = 0, j = Math.min(buf.length - offset, 2); i < j; ++i) { + buf[offset + i] = (value & (0xff << (8 * (littleEndian ? i : 1 - i)))) >>> + (littleEndian ? 
i : 1 - i) * 8 + } +} + +Buffer.prototype.writeUInt16LE = function writeUInt16LE (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 2, 0xffff, 0) + if (Buffer.TYPED_ARRAY_SUPPORT) { + this[offset] = (value & 0xff) + this[offset + 1] = (value >>> 8) + } else { + objectWriteUInt16(this, value, offset, true) + } + return offset + 2 +} + +Buffer.prototype.writeUInt16BE = function writeUInt16BE (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 2, 0xffff, 0) + if (Buffer.TYPED_ARRAY_SUPPORT) { + this[offset] = (value >>> 8) + this[offset + 1] = (value & 0xff) + } else { + objectWriteUInt16(this, value, offset, false) + } + return offset + 2 +} + +function objectWriteUInt32 (buf, value, offset, littleEndian) { + if (value < 0) value = 0xffffffff + value + 1 + for (var i = 0, j = Math.min(buf.length - offset, 4); i < j; ++i) { + buf[offset + i] = (value >>> (littleEndian ? 
i : 3 - i) * 8) & 0xff + } +} + +Buffer.prototype.writeUInt32LE = function writeUInt32LE (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 4, 0xffffffff, 0) + if (Buffer.TYPED_ARRAY_SUPPORT) { + this[offset + 3] = (value >>> 24) + this[offset + 2] = (value >>> 16) + this[offset + 1] = (value >>> 8) + this[offset] = (value & 0xff) + } else { + objectWriteUInt32(this, value, offset, true) + } + return offset + 4 +} + +Buffer.prototype.writeUInt32BE = function writeUInt32BE (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 4, 0xffffffff, 0) + if (Buffer.TYPED_ARRAY_SUPPORT) { + this[offset] = (value >>> 24) + this[offset + 1] = (value >>> 16) + this[offset + 2] = (value >>> 8) + this[offset + 3] = (value & 0xff) + } else { + objectWriteUInt32(this, value, offset, false) + } + return offset + 4 +} + +Buffer.prototype.writeIntLE = function writeIntLE (value, offset, byteLength, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) { + var limit = Math.pow(2, 8 * byteLength - 1) + + checkInt(this, value, offset, byteLength, limit - 1, -limit) + } + + var i = 0 + var mul = 1 + var sub = 0 + this[offset] = value & 0xFF + while (++i < byteLength && (mul *= 0x100)) { + if (value < 0 && sub === 0 && this[offset + i - 1] !== 0) { + sub = 1 + } + this[offset + i] = ((value / mul) >> 0) - sub & 0xFF + } + + return offset + byteLength +} + +Buffer.prototype.writeIntBE = function writeIntBE (value, offset, byteLength, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) { + var limit = Math.pow(2, 8 * byteLength - 1) + + checkInt(this, value, offset, byteLength, limit - 1, -limit) + } + + var i = byteLength - 1 + var mul = 1 + var sub = 0 + this[offset + i] = value & 0xFF + while (--i >= 0 && (mul *= 0x100)) { + if (value < 0 && sub === 0 && this[offset + i + 1] !== 0) { + sub = 1 + } + this[offset + i] = ((value / mul) >> 0) 
- sub & 0xFF + } + + return offset + byteLength +} + +Buffer.prototype.writeInt8 = function writeInt8 (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 1, 0x7f, -0x80) + if (!Buffer.TYPED_ARRAY_SUPPORT) value = Math.floor(value) + if (value < 0) value = 0xff + value + 1 + this[offset] = (value & 0xff) + return offset + 1 +} + +Buffer.prototype.writeInt16LE = function writeInt16LE (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 2, 0x7fff, -0x8000) + if (Buffer.TYPED_ARRAY_SUPPORT) { + this[offset] = (value & 0xff) + this[offset + 1] = (value >>> 8) + } else { + objectWriteUInt16(this, value, offset, true) + } + return offset + 2 +} + +Buffer.prototype.writeInt16BE = function writeInt16BE (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 2, 0x7fff, -0x8000) + if (Buffer.TYPED_ARRAY_SUPPORT) { + this[offset] = (value >>> 8) + this[offset + 1] = (value & 0xff) + } else { + objectWriteUInt16(this, value, offset, false) + } + return offset + 2 +} + +Buffer.prototype.writeInt32LE = function writeInt32LE (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 4, 0x7fffffff, -0x80000000) + if (Buffer.TYPED_ARRAY_SUPPORT) { + this[offset] = (value & 0xff) + this[offset + 1] = (value >>> 8) + this[offset + 2] = (value >>> 16) + this[offset + 3] = (value >>> 24) + } else { + objectWriteUInt32(this, value, offset, true) + } + return offset + 4 +} + +Buffer.prototype.writeInt32BE = function writeInt32BE (value, offset, noAssert) { + value = +value + offset = offset | 0 + if (!noAssert) checkInt(this, value, offset, 4, 0x7fffffff, -0x80000000) + if (value < 0) value = 0xffffffff + value + 1 + if (Buffer.TYPED_ARRAY_SUPPORT) { + this[offset] = (value >>> 24) + this[offset + 1] = (value >>> 16) + this[offset + 2] = (value >>> 8) + 
this[offset + 3] = (value & 0xff) + } else { + objectWriteUInt32(this, value, offset, false) + } + return offset + 4 +} + +function checkIEEE754 (buf, value, offset, ext, max, min) { + if (offset + ext > buf.length) throw new RangeError('Index out of range') + if (offset < 0) throw new RangeError('Index out of range') +} + +function writeFloat (buf, value, offset, littleEndian, noAssert) { + if (!noAssert) { + checkIEEE754(buf, value, offset, 4, 3.4028234663852886e+38, -3.4028234663852886e+38) + } + ieee754.write(buf, value, offset, littleEndian, 23, 4) + return offset + 4 +} + +Buffer.prototype.writeFloatLE = function writeFloatLE (value, offset, noAssert) { + return writeFloat(this, value, offset, true, noAssert) +} + +Buffer.prototype.writeFloatBE = function writeFloatBE (value, offset, noAssert) { + return writeFloat(this, value, offset, false, noAssert) +} + +function writeDouble (buf, value, offset, littleEndian, noAssert) { + if (!noAssert) { + checkIEEE754(buf, value, offset, 8, 1.7976931348623157E+308, -1.7976931348623157E+308) + } + ieee754.write(buf, value, offset, littleEndian, 52, 8) + return offset + 8 +} + +Buffer.prototype.writeDoubleLE = function writeDoubleLE (value, offset, noAssert) { + return writeDouble(this, value, offset, true, noAssert) +} + +Buffer.prototype.writeDoubleBE = function writeDoubleBE (value, offset, noAssert) { + return writeDouble(this, value, offset, false, noAssert) +} + +// copy(targetBuffer, targetStart=0, sourceStart=0, sourceEnd=buffer.length) +Buffer.prototype.copy = function copy (target, targetStart, start, end) { + if (!start) start = 0 + if (!end && end !== 0) end = this.length + if (targetStart >= target.length) targetStart = target.length + if (!targetStart) targetStart = 0 + if (end > 0 && end < start) end = start + + // Copy 0 bytes; we're done + if (end === start) return 0 + if (target.length === 0 || this.length === 0) return 0 + + // Fatal error conditions + if (targetStart < 0) { + throw new 
RangeError('targetStart out of bounds') + } + if (start < 0 || start >= this.length) throw new RangeError('sourceStart out of bounds') + if (end < 0) throw new RangeError('sourceEnd out of bounds') + + // Are we oob? + if (end > this.length) end = this.length + if (target.length - targetStart < end - start) { + end = target.length - targetStart + start + } + + var len = end - start + var i + + if (this === target && start < targetStart && targetStart < end) { + // descending copy from end + for (i = len - 1; i >= 0; --i) { + target[i + targetStart] = this[i + start] + } + } else if (len < 1000 || !Buffer.TYPED_ARRAY_SUPPORT) { + // ascending copy from start + for (i = 0; i < len; ++i) { + target[i + targetStart] = this[i + start] + } + } else { + Uint8Array.prototype.set.call( + target, + this.subarray(start, start + len), + targetStart + ) + } + + return len +} + +// Usage: +// buffer.fill(number[, offset[, end]]) +// buffer.fill(buffer[, offset[, end]]) +// buffer.fill(string[, offset[, end]][, encoding]) +Buffer.prototype.fill = function fill (val, start, end, encoding) { + // Handle string cases: + if (typeof val === 'string') { + if (typeof start === 'string') { + encoding = start + start = 0 + end = this.length + } else if (typeof end === 'string') { + encoding = end + end = this.length + } + if (val.length === 1) { + var code = val.charCodeAt(0) + if (code < 256) { + val = code + } + } + if (encoding !== undefined && typeof encoding !== 'string') { + throw new TypeError('encoding must be a string') + } + if (typeof encoding === 'string' && !Buffer.isEncoding(encoding)) { + throw new TypeError('Unknown encoding: ' + encoding) + } + } else if (typeof val === 'number') { + val = val & 255 + } + + // Invalid ranges are not set to a default, so can range check early. 
+ if (start < 0 || this.length < start || this.length < end) { + throw new RangeError('Out of range index') + } + + if (end <= start) { + return this + } + + start = start >>> 0 + end = end === undefined ? this.length : end >>> 0 + + if (!val) val = 0 + + var i + if (typeof val === 'number') { + for (i = start; i < end; ++i) { + this[i] = val + } + } else { + var bytes = Buffer.isBuffer(val) + ? val + : utf8ToBytes(new Buffer(val, encoding).toString()) + var len = bytes.length + for (i = 0; i < end - start; ++i) { + this[i + start] = bytes[i % len] + } + } + + return this +} + +// HELPER FUNCTIONS +// ================ + +var INVALID_BASE64_RE = /[^+\/0-9A-Za-z-_]/g + +function base64clean (str) { + // Node strips out invalid characters like \n and \t from the string, base64-js does not + str = stringtrim(str).replace(INVALID_BASE64_RE, '') + // Node converts strings with length < 2 to '' + if (str.length < 2) return '' + // Node allows for non-padded base64 strings (missing trailing ===), base64-js does not + while (str.length % 4 !== 0) { + str = str + '=' + } + return str +} + +function stringtrim (str) { + if (str.trim) return str.trim() + return str.replace(/^\s+|\s+$/g, '') +} + +function toHex (n) { + if (n < 16) return '0' + n.toString(16) + return n.toString(16) +} + +function utf8ToBytes (string, units) { + units = units || Infinity + var codePoint + var length = string.length + var leadSurrogate = null + var bytes = [] + + for (var i = 0; i < length; ++i) { + codePoint = string.charCodeAt(i) + + // is surrogate component + if (codePoint > 0xD7FF && codePoint < 0xE000) { + // last char was a lead + if (!leadSurrogate) { + // no lead yet + if (codePoint > 0xDBFF) { + // unexpected trail + if ((units -= 3) > -1) bytes.push(0xEF, 0xBF, 0xBD) + continue + } else if (i + 1 === length) { + // unpaired lead + if ((units -= 3) > -1) bytes.push(0xEF, 0xBF, 0xBD) + continue + } + + // valid lead + leadSurrogate = codePoint + + continue + } + + // 2 leads in a row + 
if (codePoint < 0xDC00) { + if ((units -= 3) > -1) bytes.push(0xEF, 0xBF, 0xBD) + leadSurrogate = codePoint + continue + } + + // valid surrogate pair + codePoint = (leadSurrogate - 0xD800 << 10 | codePoint - 0xDC00) + 0x10000 + } else if (leadSurrogate) { + // valid bmp char, but last char was a lead + if ((units -= 3) > -1) bytes.push(0xEF, 0xBF, 0xBD) + } + + leadSurrogate = null + + // encode utf8 + if (codePoint < 0x80) { + if ((units -= 1) < 0) break + bytes.push(codePoint) + } else if (codePoint < 0x800) { + if ((units -= 2) < 0) break + bytes.push( + codePoint >> 0x6 | 0xC0, + codePoint & 0x3F | 0x80 + ) + } else if (codePoint < 0x10000) { + if ((units -= 3) < 0) break + bytes.push( + codePoint >> 0xC | 0xE0, + codePoint >> 0x6 & 0x3F | 0x80, + codePoint & 0x3F | 0x80 + ) + } else if (codePoint < 0x110000) { + if ((units -= 4) < 0) break + bytes.push( + codePoint >> 0x12 | 0xF0, + codePoint >> 0xC & 0x3F | 0x80, + codePoint >> 0x6 & 0x3F | 0x80, + codePoint & 0x3F | 0x80 + ) + } else { + throw new Error('Invalid code point') + } + } + + return bytes +} + +function asciiToBytes (str) { + var byteArray = [] + for (var i = 0; i < str.length; ++i) { + // Node's code seems to be doing this and not & 0x7F.. 
+ byteArray.push(str.charCodeAt(i) & 0xFF) + } + return byteArray +} + +function utf16leToBytes (str, units) { + var c, hi, lo + var byteArray = [] + for (var i = 0; i < str.length; ++i) { + if ((units -= 2) < 0) break + + c = str.charCodeAt(i) + hi = c >> 8 + lo = c % 256 + byteArray.push(lo) + byteArray.push(hi) + } + + return byteArray +} + +function base64ToBytes (str) { + return base64.toByteArray(base64clean(str)) +} + +function blitBuffer (src, dst, offset, length) { + for (var i = 0; i < length; ++i) { + if ((i + offset >= dst.length) || (i >= src.length)) break + dst[i + offset] = src[i] + } + return i +} + +function isnan (val) { + return val !== val // eslint-disable-line no-self-compare +} + +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(27))) + +/***/ }), +/* 40 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* WEBPACK VAR INJECTION */(function(setImmediate) {/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return nextFrame; }); +/** + * @license + * Copyright 2017 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const delayCallback = (() => { + if (typeof requestAnimationFrame !== 'undefined') { + return requestAnimationFrame; + } + else if (typeof setImmediate !== 'undefined') { + return setImmediate; + } + return (f) => f(); // no delays +})(); +/** + * Returns a promise that resolves when a requestAnimationFrame has completed. + * + * On Node.js this uses setImmediate instead of requestAnimationFrame. + * + * This is simply a sugar method so that users can do the following: + * `await tf.nextFrame();` + */ +/** @doc {heading: 'Performance', subheading: 'Timing'} */ +function nextFrame() { + return new Promise(resolve => delayCallback(() => resolve())); +} + +//# sourceMappingURL=browser_util.js.map +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(76).setImmediate)) + +/***/ }), +/* 41 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'Add', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'AddV2', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'AddN', + 'category': 'arithmetic', + 'inputs': [{ 'start': 0, 'end': 0, 'name': 'tensors', 'type': 'tensors' }] + }, + { + 'tfOpName': 'BiasAdd', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Sub', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'RealDiv', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Div', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'DivNoNan', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 
'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'FloorDiv', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Mul', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Maximum', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' } + ] + }, + { + 'tfOpName': 'Minimum', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' } + ] + }, + { + 'tfOpName': 'Pow', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'SquaredDifference', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Mod', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'FloorMod', + 'category': 'arithmetic', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 
'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + } +]; +//# sourceMappingURL=arithmetic.js.map + +/***/ }), +/* 42 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'Abs', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Acos', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Asin', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Atan', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Atan2', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'y', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Ceil', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'ClipByValue', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'clip_value_min', 'name': 'clipValueMin', 'type': 'number' }, + { 'tfName': 'clip_value_max', 'name': 'clipValueMax', 'type': 'number' } + ] + }, + { + 'tfOpName': 'Complex', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'real', 'type': 'tensor' }, + { 'start': 1, 'name': 'imag', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 
'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'ComplexAbs', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Cos', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Cosh', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Elu', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Exp', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Floor', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Log', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Imag', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true }, { + 'tfName': 'Tout', + 'name': 'outputType', + 'type': 'dtype', + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'Neg', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 
'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Real', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true }, { + 'tfName': 'Tout', + 'name': 'outputType', + 'type': 'dtype', + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'Prelu', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'alpha', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Relu', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Relu6', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true }, { + 'tfName': 'clipValueMin', + 'name': 'clipValueMin', + 'type': 'number', + 'defaultValue': 0 + }, + { + 'tfName': 'clipValueMax', + 'name': 'clipValueMax', + 'type': 'number', + 'defaultValue': 6 + } + ] + }, + { + 'tfOpName': 'Selu', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Sigmoid', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Sin', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 
'notSupported': true } + ] + }, + { + 'tfOpName': 'Sinh', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Sqrt', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Rsqrt', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Square', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Tan', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Tanh', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Sign', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Round', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Expm1', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': 
true } + ] + }, + { + 'tfOpName': 'Log1p', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Reciprocal', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Softplus', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Asinh', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Acosh', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Atanh', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Erf', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Prod', + 'category': 'basic_math', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axes', 'type': 'number[]' }, + ], + 'attrs': [ + { + 'tfName': 'keep_dims', + 'name': 'keepDims', + 'type': 'bool', + 'notSupported': true + }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'LeakyRelu', + 'category': 'basic_math', 
+ 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'alpha', + 'name': 'alpha', + 'type': 'number', + 'defaultValue': 0.2 + }, + { + 'tfName': 'T', + 'name': 'dtype', + 'type': 'dtype', + 'notSupported': true + } + ] + } +]; +//# sourceMappingURL=basic_math.js.map + +/***/ }), +/* 43 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'LoopCond', + 'category': 'control', + 'inputs': [{ 'start': 0, 'name': 'pred', 'type': 'tensor' }] + }, + { + 'tfOpName': 'Switch', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'data', 'type': 'tensor' }, + { 'start': 1, 'name': 'pred', 'type': 'tensor' } + ] + }, + { + 'tfOpName': 'Merge', + 'category': 'control', + 'inputs': [{ 'start': 0, 'end': 0, 'name': 'tensors', 'type': 'tensors' }] + }, + { + 'tfOpName': 'Enter', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensor', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true }, + { 'tfName': 'frame_name', 'name': 'frameName', 'type': 'string' }, + { 'tfName': 'is_constant', 'name': 'isConstant', 'type': 'bool' } + ] + }, + { + 'tfOpName': 'Exit', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensor', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'NextIteration', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensor', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'TensorArrayV3', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'size', 'type': 'number' }, + ], + 'attrs': [ + { 'tfName': 'dtype', 'name': 'dtype', 'type': 'dtype' }, + { 'tfName': 'element_shape', 'name': 'elementShape', 'type': 'shape' }, + { 'tfName': 'dynamic_size', 'name': 'dynamicSize', 'type': 'bool' }, + { 'tfName': 'clear_after_read', 'name': 'clearAfterRead', 'type': 'bool' }, + { + 'tfName': 'identical_element_shapes', + 'name': 'identicalElementShapes', + 'type': 'bool' + }, + { 'tfName': 'tensor_array_name', 'name': 'name', 'type': 'string' } + ] + }, + { + 'tfOpName': 'TensorArrayWriteV3', 
+ 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensorArrayId', 'type': 'number' }, + { 'start': 1, 'name': 'index', 'type': 'number' }, + { 'start': 2, 'name': 'tensor', 'type': 'tensor' }, + { 'start': 3, 'name': 'flowIn', 'type': 'number' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'TensorArrayReadV3', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensorArrayId', 'type': 'number' }, + { 'start': 1, 'name': 'index', 'type': 'number' }, + { 'start': 2, 'name': 'flowIn', 'type': 'number' }, + ], + 'attrs': [{ + 'tfName': 'dtype', + 'name': 'dtype', + 'type': 'dtype', + 'notSupported': true + }] + }, + { + 'tfOpName': 'TensorArrayGatherV3', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensorArrayId', 'type': 'number' }, + { 'start': 1, 'name': 'indices', 'type': 'number[]' }, + { 'start': 2, 'name': 'flowIn', 'type': 'number' }, + ], + 'attrs': [ + { 'tfName': 'dtype', 'name': 'dtype', 'type': 'dtype' }, + { 'tfName': 'element_shape', 'name': 'elementShape', 'type': 'shape' } + ] + }, + { + 'tfOpName': 'TensorArrayScatterV3', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensorArrayId', 'type': 'number' }, + { 'start': 1, 'name': 'indices', 'type': 'number[]' }, + { 'start': 2, 'name': 'tensor', 'type': 'tensor' }, + { 'start': 3, 'name': 'flowIn', 'type': 'number' }, + ], + 'attrs': [{ 'tfName': 'T', 'name': 'dtype', 'type': 'dtype' }] + }, + { + 'tfOpName': 'TensorArrayConcatV3', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensorArrayId', 'type': 'number' }, + { 'start': 1, 'name': 'flowIn', 'type': 'number' }, + ], + 'attrs': [ + { 'tfName': 'dtype', 'name': 'dtype', 'type': 'dtype' }, { + 'tfName': 'element_shape_except0', + 'name': 'elementShapeExcept0', + 'type': 'shape', + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'TensorArraySplitV3', + 'category': 'control', + 'inputs': [ + { 
'start': 0, 'name': 'tensorArrayId', 'type': 'number' }, + { 'start': 1, 'name': 'tensor', 'type': 'tensor' }, + { 'start': 2, 'name': 'lengths', 'type': 'number[]' }, + { 'start': 3, 'name': 'flowIn', 'type': 'number' }, + ], + 'attrs': [{ 'tfName': 'T', 'name': 'dtype', 'type': 'dtype' }] + }, + { + 'tfOpName': 'TensorArraySizeV3', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'tensorArrayId', 'type': 'number' }, + { 'start': 1, 'name': 'flowIn', 'type': 'number' } + ] + }, + { + 'tfOpName': 'TensorArrayCloseV3', + 'category': 'control', + 'inputs': [{ 'start': 0, 'name': 'tensorArrayId', 'type': 'number' }] + }, + { + 'tfOpName': 'StatelessIf', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'cond', 'type': 'tensor' }, + { 'start': 1, 'end': 0, 'name': 'args', 'type': 'tensors' } + ], + 'attrs': [ + { 'tfName': 'then_branch', 'name': 'thenBranch', 'type': 'func' }, + { 'tfName': 'else_branch', 'name': 'elseBranch', 'type': 'func' } + ] + }, + { + 'tfOpName': 'If', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'name': 'cond', 'type': 'tensor' }, + { 'start': 1, 'end': 0, 'name': 'args', 'type': 'tensors' } + ], + 'attrs': [ + { 'tfName': 'then_branch', 'name': 'thenBranch', 'type': 'func' }, + { 'tfName': 'else_branch', 'name': 'elseBranch', 'type': 'func' } + ] + }, + { + 'tfOpName': 'StatelessWhile', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'end': 0, 'name': 'args', 'type': 'tensors' }, + ], + 'attrs': [ + { 'tfName': 'cond', 'name': 'cond', 'type': 'func' }, + { 'tfName': 'body', 'name': 'body', 'type': 'func' } + ] + }, + { + 'tfOpName': 'While', + 'category': 'control', + 'inputs': [ + { 'start': 0, 'end': 0, 'name': 'args', 'type': 'tensors' }, + ], + 'attrs': [ + { 'tfName': 'cond', 'name': 'cond', 'type': 'func' }, + { 'tfName': 'body', 'name': 'body', 'type': 'func' } + ] + } +]; +//# sourceMappingURL=control.js.map + +/***/ }), +/* 44 */ +/***/ (function(module, __webpack_exports__, 
__webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'AvgPool', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'notSupported': true + }, + { 'tfName': 'ksize', 'name': 'kernelSize', 'type': 'number[]' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'MaxPool', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'notSupported': true + }, + { 'tfName': 'ksize', 'name': 'kernelSize', 'type': 'number[]' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 
'MaxPoolWithArgmax', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, + { 'tfName': 'ksize', 'name': 'kernelSize', 'type': 'number[]' }, { + 'tfName': 'include_batch_in_index', + 'name': 'includeBatchInIndex', + 'type': 'bool' + }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'AvgPool3D', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'notSupported': true + }, + { 'tfName': 'ksize', 'name': 'kernelSize', 'type': 'number[]' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'MaxPool3D', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'notSupported': true + }, + { 'tfName': 'ksize', 'name': 'kernelSize', 'type': 'number[]' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Conv1D', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'filter', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'stride', 'name': 'stride', 'type': 'number' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'defaultValue': 'NWC' + }, + { 'tfName': 'T', 'name': 'dtype', 
'type': 'dtype', 'notSupported': true }, { + 'tfName': 'dilation', + 'name': 'dilation', + 'type': 'number', + 'defaultValue': 1 + } + ] + }, + { + 'tfOpName': 'Conv2D', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'filter', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true }, + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, + { 'tfName': 'useCudnnOnGpu', 'name': 'useCudnnOnGpu', 'type': 'bool' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'defaultValue': 'NHWC' + }, + { + 'tfName': 'explicit_paddings', + 'name': 'explicitPaddings', + 'type': 'number[]', + 'defaultValue': [] + }, + { 'tfName': 'dilations', 'name': 'dilations', 'type': 'number[]' } + ] + }, + { + 'tfOpName': '_FusedConv2D', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'filter', 'type': 'tensor' }, + { 'start': 2, end: 0, 'name': 'args', 'type': 'tensors' }, + ], + 'attrs': [ + { 'tfName': 'num_args', 'name': 'numArgs', 'type': 'number' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true }, + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, + { + 'tfName': 'explicit_paddings', + 'name': 'explicitPaddings', + 'type': 'number[]', + 'defaultValue': [] + }, + { + 'tfName': 'use_cudnn_on_gpu', + 'name': 'useCudnnOnGpu', + 'type': 'bool', + 'defaultValue': true + }, + { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'defaultValue': 'NHWC' + }, + { + 'tfName': 'dilations', + 'name': 'dilations', + 'type': 'number[]', + 'defaultValue': [1, 1, 1, 1] + }, + { + 'tfName': 'fused_ops', + 'name': 'fusedOps', + 'type': 'string[]', + 'defaultValue': [] + }, + { + 'tfName': 'epsilon', 
+ 'name': 'epsilon', + 'type': 'number', + 'defaultValue': 0.0001 + }, + ] + }, + { + 'tfOpName': 'Conv2DBackpropInput', + 'category': 'convolution', + 'inputs': [ + { 'start': 2, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'filter', 'type': 'tensor' }, + { 'start': 0, 'name': 'outputShape', 'type': 'number[]' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, + { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'notSupported': true + }, + { + 'tfName': 'explicit_paddings', + 'name': 'explicitPaddings', + 'type': 'number[]', + 'defaultValue': [] + }, + ] + }, + { + 'tfOpName': 'DepthwiseConv2d', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'input', 'type': 'tensor' }, + { 'start': 1, 'name': 'filter', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'defaultValue': 'NHWC' + }, + { + 'tfName': 'explicit_paddings', + 'name': 'explicitPaddings', + 'type': 'number[]', + 'defaultValue': [] + }, + { 'tfName': 'dilations', 'name': 'dilations', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 'DepthwiseConv2dNative', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'input', 'type': 'tensor' }, + { 'start': 1, 'name': 'filter', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'defaultValue': 'NHWC' + }, + { + 'tfName': 'explicit_paddings', + 'name': 'explicitPaddings', + 'type': 'number[]', + 'defaultValue': [] + }, + { 'tfName': 'dilations', 'name': 'dilations', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 
'FusedDepthwiseConv2dNative', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'filter', 'type': 'tensor' }, + { 'start': 2, end: 0, 'name': 'args', 'type': 'tensors' }, + ], + 'attrs': [ + { 'tfName': 'num_args', 'name': 'numArgs', 'type': 'number' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true }, + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'defaultValue': 'NHWC' + }, + { + 'tfName': 'dilations', + 'name': 'dilations', + 'type': 'number[]', + 'defaultValue': [1, 1, 1, 1] + }, + { + 'tfName': 'fused_ops', + 'name': 'fusedOps', + 'type': 'string[]', + 'defaultValue': [] + } + ] + }, + { + 'tfOpName': 'Conv3D', + 'category': 'convolution', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'filter', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'strides', 'name': 'strides', 'type': 'number[]' }, + { 'tfName': 'padding', 'name': 'pad', 'type': 'string' }, { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'defaultValue': 'NHWC' + }, + { 'tfName': 'dilations', 'name': 'dilations', 'type': 'number[]' } + ], + } +]; +//# sourceMappingURL=convolution.js.map + +/***/ }), +/* 45 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'Fill', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'shape', 'type': 'number[]' }, + { 'start': 1, 'name': 'value', 'type': 'number' }, + ], + 'attrs': [{ 'tfName': 'T', 'name': 'dtype', 'type': 'dtype' }] + }, + { + 'tfOpName': 'LinSpace', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'start', 'type': 'number' }, + { 'start': 1, 'name': 'stop', 'type': 'number' }, + { 'start': 2, 'name': 'num', 'type': 'number' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'OneHot', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'indices', 'type': 'tensor' }, + { 'start': 1, 'name': 'depth', 'type': 'number' }, + { 'start': 2, 'name': 'onValue', 'type': 'number', 'defaultValue': 1 }, + { 'start': 3, 'name': 'offValue', 'type': 'number', 'defaultValue': 0 }, + ], + 'attrs': [ + { + 'tfName': 'axis', + 'name': 'axis', + 'type': 'number', + 'notSupported': true + }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Ones', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'shape', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'T', 'name': 'dtype', 'type': 'dtype' }] + }, + { + 'tfOpName': 'OnesLike', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [{ 'tfName': 'dtype', 'name': 'dtype', 
'type': 'dtype' }] + }, + { + 'tfOpName': 'RandomUniform', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'shape', 'type': 'number[]' }, + ], + 'attrs': [ + { + 'tfName': 'minval', + 'name': 'minval', + 'type': 'number', + 'defaultValue': 0 + }, + { + 'tfName': 'maxval', + 'name': 'maxval', + 'type': 'number', + 'defaultValue': 1 + }, + { 'tfName': 'dtype', 'name': 'dtype', 'type': 'dtype' }, + { 'tfName': 'seed', 'name': 'seed', 'type': 'number', 'defaultValue': 0 }, { + 'tfName': 'seed2', + 'name': 'seed2', + 'type': 'number', + 'defaultValue': 0, + 'notSupported': true + }, + { 'tfName': 'T', 'name': 'T', 'type': 'number', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Range', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'start', 'type': 'number' }, + { 'start': 1, 'name': 'stop', 'type': 'number' }, + { 'start': 2, 'name': 'step', 'type': 'number', 'defaultValue': 0 }, + ], + 'attrs': [{ 'tfName': 'Tidx', 'name': 'dtype', 'type': 'dtype' }] + }, + { + 'tfOpName': 'TruncatedNormal', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'shape', 'type': 'number[]' }, + ], + 'attrs': [ + { + 'tfName': 'means', + 'name': 'mean', + 'type': 'number', + 'defaultValue': 0.0 + }, + { + 'tfName': 'stddev', + 'name': 'stdDev', + 'type': 'number', + 'defaultValue': 1.0 + }, + { 'tfName': 'seed', 'name': 'seed', 'type': 'number' }, { + 'tfName': 'seed2', + 'name': 'seed2', + 'type': 'number', + 'defaultValue': 0, + 'notSupported': true + }, + { 'tfName': 'dtype', 'name': 'dtype', 'type': 'dtype' }, + { 'tfName': 'T', 'name': 'T', 'type': 'number', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Zeros', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'shape', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'T', 'name': 'dtype', 'type': 'dtype' }] + }, + { + 'tfOpName': 'ZerosLike', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [{ 'tfName': 'T', 
'name': 'dtype', 'type': 'dtype' }] + }, + { + 'tfOpName': 'Multinomial', + 'category': 'creation', + 'inputs': [ + { 'start': 0, 'name': 'logits', 'type': 'tensor' }, + { 'start': 1, 'name': 'numSamples', 'type': 'number' }, + ], + 'attrs': [ + { 'tfName': 'seed', 'name': 'seed', 'type': 'number' }, + { 'tfName': 'seed2', 'name': 'seed2', 'type': 'number' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype' }, + { 'tfName': 'output_dtype', 'name': 'output_dtype', 'type': 'dtype' } + ] + } +]; +//# sourceMappingURL=creation.js.map + +/***/ }), +/* 46 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'NonMaxSuppressionV2', + 'category': 'dynamic', + 'inputs': [ + { 'start': 0, 'name': 'boxes', 'type': 'tensor' }, + { 'start': 1, 'name': 'scores', 'type': 'tensor' }, + { 'start': 2, 'name': 'maxOutputSize', 'type': 'number' }, + { 'start': 3, 'name': 'iouThreshold', 'type': 'number' } + ] + }, + { + 'tfOpName': 'NonMaxSuppressionV3', + 'category': 'dynamic', + 'inputs': [ + { 'start': 0, 'name': 'boxes', 'type': 'tensor' }, + { 'start': 1, 'name': 'scores', 'type': 'tensor' }, + { 'start': 2, 'name': 'maxOutputSize', 'type': 'number' }, + { 'start': 3, 'name': 'iouThreshold', 'type': 'number' }, + { 'start': 4, 'name': 'scoreThreshold', 'type': 'number' } + ] + }, + { + 'tfOpName': 'NonMaxSuppressionV5', + 'category': 'dynamic', + 'inputs': [ + { 'start': 0, 'name': 'boxes', 'type': 'tensor' }, + { 'start': 1, 'name': 'scores', 'type': 'tensor' }, + { 'start': 2, 'name': 'maxOutputSize', 'type': 'number' }, + { 'start': 3, 'name': 'iouThreshold', 'type': 'number' }, + { 'start': 4, 'name': 'scoreThreshold', 'type': 'number' }, + { 'start': 5, 'name': 'softNmsSigma', 'type': 'number' } + ] + }, + { + 'tfOpName': 'Where', + 'category': 'dynamic', + 'inputs': [ + { 'start': 0, 'name': 'condition', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'ListDiff', + 'category': 'dynamic', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'y', 'type': 'tensor' }, + ], + 'attrs': [{ + 'tfName': 'T', + 'name': 'dtype', + 'type': 'dtype', + 'notSupported': true + }] + } +]; +//# sourceMappingURL=dynamic.js.map + +/***/ }), +/* 47 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ 
__webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const json = [{ + 'tfOpName': 'TopKV2', + 'category': 'evaluation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'k', 'type': 'number' }, + ], + 'attrs': [{ 'tfName': 'sorted', 'name': 'sorted', 'type': 'bool' }] + }]; +//# sourceMappingURL=evaluation.js.map + +/***/ }), +/* 48 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'PlaceholderWithDefault', + 'category': 'graph', + 'inputs': [ + { 'start': 0, 'name': 'default', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'shape', 'name': 'shape', 'type': 'shape' }, + { 'tfName': 'dtype', 'name': 'dtype', 'type': 'dtype' } + ] + }, + { + 'tfOpName': 'Placeholder', + 'category': 'graph', + 'attrs': [ + { 'tfName': 'shape', 'name': 'shape', 'type': 'shape' }, + { 'tfName': 'dtype', 'name': 'dtype', 'type': 'dtype' } + ] + }, + { 'tfOpName': 'Const', 'category': 'graph' }, { + 'tfOpName': 'Identity', + 'category': 'graph', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'IdentityN', + 'category': 'graph', + 'inputs': [{ 'start': 0, 'end': 0, 'name': 'x', 'type': 'tensors' }] + }, + { + 'tfOpName': 'Snapshot', + 'category': 'graph', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'Rank', + 'category': 'graph', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'Size', + 'category': 'graph', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'Shape', + 'category': 'graph', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'ShapeN', + 'category': 'graph', + 'inputs': [{ 'start': 0, 'end': 0, 'name': 'x', 'type': 'tensors' }] + }, + { + 'tfOpName': 'Print', + 'category': 'graph', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'data', 'type': 'tensors' }, + ], + 'attrs': [ + { 'tfName': 'message', 'name': 'message', 'type': 'string' }, { + 'tfName': 'first_n', + 'name': 'firstN', + 'type': 'number', + 'notSupported': true + }, + { + 'tfName': 'summarize', + 'name': 'summarize', + 'type': 'number', + 'defaultValue': 3 
+ } + ] + }, + { 'tfOpName': 'NoOp', 'category': 'graph', 'inputs': [] }, { + 'tfOpName': 'StopGradient', + 'category': 'graph', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'FakeQuantWithMinMaxVars', + 'category': 'graph', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'min', 'name': 'min', 'type': 'number' }, + { 'tfName': 'max', 'name': 'max', 'type': 'number' } + ] + } +]; +//# sourceMappingURL=graph.js.map + +/***/ }), +/* 49 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'ResizeBilinear', + 'category': 'image', + 'inputs': [ + { 'start': 0, 'name': 'images', 'type': 'tensor' }, + { 'start': 1, 'name': 'size', 'type': 'number[]' }, + ], + 'attrs': [ + { 'tfName': 'align_corners', 'name': 'alignCorners', 'type': 'bool' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'ResizeNearestNeighbor', + 'category': 'image', + 'inputs': [ + { 'start': 0, 'name': 'images', 'type': 'tensor' }, + { 'start': 1, 'name': 'size', 'type': 'number[]' }, + ], + 'attrs': [ + { 'tfName': 'align_corners', 'name': 'alignCorners', 'type': 'bool' }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'CropAndResize', + 'category': 'image', + 'inputs': [ + { 'start': 0, 'name': 'image', 'type': 'tensor' }, + { 'start': 1, 'name': 'boxes', 'type': 'tensor' }, + { 'start': 2, 'name': 'boxInd', 'type': 'tensor' }, + { 'start': 3, 'name': 'cropSize', 'type': 'number[]' }, + ], + 'attrs': [ + { 'tfName': 'method', 'name': 'method', 'type': 'string' }, { + 'tfName': 'extrapolation_value', + 'name': 'extrapolationValue', + 'type': 'number' + } + ] + } +]; +//# sourceMappingURL=image.js.map + +/***/ }), +/* 50 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'Equal', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'NotEqual', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Greater', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'GreaterEqual', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Less', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'LessEqual', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 
'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'LogicalAnd', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'LogicalNot', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'LogicalOr', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Select', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'condition', 'type': 'tensor' }, + { 'start': 1, 'name': 'a', 'type': 'tensor' }, + { 'start': 2, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'SelectV2', + 'category': 'logical', + 'inputs': [ + { 'start': 0, 'name': 'condition', 'type': 'tensor' }, + { 'start': 1, 'name': 'a', 'type': 'tensor' }, + { 'start': 2, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [{ + 'tfName': 'T', + 'name': 'dtype', + 'type': 'dtype', + 'notSupported': true + }] + } +]; +//# sourceMappingURL=logical.js.map + +/***/ }), +/* 51 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. 
+ * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const json = [ + { + 'tfOpName': '_FusedMatMul', + 'category': 'matrices', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + { 'start': 2, end: 0, 'name': 'args', 'type': 'tensors' }, + ], + 'attrs': [ + { 'tfName': 'num_args', 'name': 'numArgs', 'type': 'number' }, { + 'tfName': 'fused_ops', + 'name': 'fusedOps', + 'type': 'string[]', + 'defaultValue': [] + }, + { + 'tfName': 'epsilon', + 'name': 'epsilon', + 'type': 'number', + 'defaultValue': 0.0001 + }, + { + 'tfName': 'transpose_a', + 'name': 'transposeA', + 'type': 'bool', + 'defaultValue': false + }, + { + 'tfName': 'transpose_b', + 'name': 'transposeB', + 'type': 'bool', + 'defaultValue': false + }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'MatMul', + 'category': 'matrices', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'transpose_a', + 'name': 'transposeA', + 'type': 'bool', + 'defaultValue': false + }, + { + 'tfName': 'transpose_b', + 'name': 'transposeB', + 'type': 'bool', + 'defaultValue': false + }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'BatchMatMul', + 'category': 'matrices', + 
'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'adj_x', + 'name': 'transposeA', + 'type': 'bool', + 'defaultValue': false + }, + { + 'tfName': 'adj_y', + 'name': 'transposeB', + 'type': 'bool', + 'defaultValue': false + }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'BatchMatMulV2', + 'category': 'matrices', + 'inputs': [ + { 'start': 0, 'name': 'a', 'type': 'tensor' }, + { 'start': 1, 'name': 'b', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'adj_x', + 'name': 'transposeA', + 'type': 'bool', + 'defaultValue': false + }, + { + 'tfName': 'adj_y', + 'name': 'transposeB', + 'type': 'bool', + 'defaultValue': false + }, + { 'tfName': 'T', 'name': 'dtype', 'type': 'dtype', 'notSupported': true } + ] + }, + { + 'tfOpName': 'Transpose', + 'category': 'matrices', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'perm', 'type': 'number[]' }, + ], + 'attrs': [{ + 'tfName': 'T', + 'name': 'dtype', + 'type': 'dtype', + 'notSupported': true + }] + } +]; +//# sourceMappingURL=matrices.js.map + +/***/ }), +/* 52 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'FusedBatchNorm', + 'category': 'normalization', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'scale', 'type': 'tensor' }, + { 'start': 2, 'name': 'offset', 'type': 'tensor' }, + { 'start': 3, 'name': 'mean', 'type': 'tensor' }, + { 'start': 4, 'name': 'variance', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'epsilon', + 'name': 'epsilon', + 'type': 'number', + 'defaultValue': 0.001 + }, + { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'FusedBatchNormV2', + 'category': 'normalization', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'scale', 'type': 'tensor' }, + { 'start': 2, 'name': 'offset', 'type': 'tensor' }, + { 'start': 3, 'name': 'mean', 'type': 'tensor' }, + { 'start': 4, 'name': 'variance', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'epsilon', + 'name': 'epsilon', + 'type': 'number', + 'defaultValue': 0.001 + }, + { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'FusedBatchNormV3', + 'category': 'normalization', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'scale', 'type': 'tensor' }, + { 'start': 2, 'name': 'offset', 'type': 'tensor' }, + { 'start': 3, 'name': 'mean', 'type': 'tensor' }, + { 'start': 4, 'name': 'variance', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'epsilon', + 'name': 'epsilon', + 'type': 'number', + 'defaultValue': 0.001 + }, + { + 'tfName': 'data_format', + 'name': 'dataFormat', + 'type': 'string', + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'LRN', + 'category': 'normalization', + 'inputs': [ + 
{ 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'depth_radius', + 'name': 'radius', + 'type': 'number', + 'defaultValue': 5 + }, + { 'tfName': 'bias', 'name': 'bias', 'type': 'number', 'defaultValue': 1.0 }, + { + 'tfName': 'alpha', + 'name': 'alpha', + 'type': 'number', + 'defaultValue': 1.0 + }, + { + 'tfName': 'beta', + 'name': 'beta', + 'type': 'number', + 'defaultValue': 0.5 + } + ] + }, + { + 'tfOpName': 'Softmax', + 'category': 'normalization', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'LogSoftmax', + 'category': 'normalization', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'SparseToDense', + 'category': 'normalization', + 'inputs': [ + { 'start': 0, 'name': 'sparseIndices', 'type': 'tensor' }, + { 'start': 1, 'name': 'outputShape', 'type': 'number[]' }, + { 'start': 2, 'name': 'sparseValues', 'type': 'tensor' }, + { 'start': 3, 'name': 'defaultValue', 'type': 'tensor' }, + ], + 'attrs': [{ + 'tfName': 'validate_indices', + 'name': 'validateIndices', + 'type': 'bool', + 'defaultValue': true, + 'notSupported': true + }] + } +]; +//# sourceMappingURL=normalization.js.map + +/***/ }), +/* 53 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'Max', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'keep_dims', 'name': 'keepDims', 'type': 'bool' }] + }, + { + 'tfOpName': 'Mean', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'keep_dims', 'name': 'keepDims', 'type': 'bool' }] + }, + { + 'tfOpName': 'Min', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'keep_dims', 'name': 'keepDims', 'type': 'bool' }] + }, + { + 'tfOpName': 'Sum', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'keep_dims', 'name': 'keepDims', 'type': 'bool' }] + }, + { + 'tfOpName': 'All', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'keep_dims', 'name': 'keepDims', 'type': 'bool' }] + }, + { + 'tfOpName': 'Any', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'keep_dims', 'name': 'keepDims', 'type': 'bool' }] + }, + { + 'tfOpName': 'ArgMax', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number' } + ] + }, + { + 'tfOpName': 'ArgMin', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 
'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number' } + ] + }, + { + 'tfOpName': 'Prod', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number[]' }, + ], + 'attrs': [{ 'tfName': 'keep_dims', 'name': 'keepDims', 'type': 'bool' }] + }, + { + 'tfOpName': 'Cumsum', + 'category': 'reduction', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number' }, + ], + 'attrs': [ + { 'tfName': 'exclusive', 'name': 'exclusive', 'type': 'bool' }, + { 'tfName': 'reverse', 'name': 'reverse', 'type': 'bool' } + ] + } +]; +//# sourceMappingURL=reduction.js.map + +/***/ }), +/* 54 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'ConcatV2', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'end': -1, 'name': 'tensors', 'type': 'tensors' }, + { 'start': -1, 'name': 'axis', 'type': 'number' } + ], + 'attrs': [{ 'tfName': 'N', 'name': 'n', 'type': 'number', 'defaultValue': 2 }] + }, + { + 'tfOpName': 'Concat', + 'category': 'slice_join', + 'inputs': [ + { 'start': 1, 'end': 0, 'name': 'tensors', 'type': 'tensors' }, + { 'start': 0, 'name': 'axis', 'type': 'number' } + ], + 'attrs': [{ 'tfName': 'N', 'name': 'n', 'type': 'number', 'defaultValue': 2 }] + }, + { + 'tfOpName': 'GatherV2', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'indices', 'type': 'tensor' }, + { 'start': 2, 'name': 'axis', 'type': 'number', 'defaultValue': 0 } + ] + }, + { + 'tfOpName': 'Gather', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'indices', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'axis', 'name': 'axis', 'type': 'number', 'defaultValue': 0 }, { + 'tfName': 'validate_indices', + 'name': 'validateIndices', + 'type': 'bool', + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'Reverse', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'dims', 'type': 'bool', 'notSupported': true } + ] + }, + { + 'tfOpName': 'ReverseV2', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 'Slice', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'begin', 'type': 'number[]' }, + { 'start': 2, 'name': 'size', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 'StridedSlice', + 'category': 'slice_join', + 'inputs': [ 
+ { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'begin', 'type': 'number[]' }, + { 'start': 2, 'name': 'end', 'type': 'number[]' }, + { 'start': 3, 'name': 'strides', 'type': 'number[]' }, + ], + 'attrs': [ + { + 'tfName': 'begin_mask', + 'name': 'beginMask', + 'type': 'number', + 'defaultValue': 0 + }, + { + 'tfName': 'end_mask', + 'name': 'endMask', + 'type': 'number', + 'defaultValue': 0 + }, + { + 'tfName': 'new_axis_mask', + 'name': 'newAxisMask', + 'type': 'number', + 'defaultValue': 0 + }, + { + 'tfName': 'ellipsis_mask', + 'name': 'ellipsisMask', + 'type': 'number', + 'defaultValue': 0 + }, + { + 'tfName': 'shrink_axis_mask', + 'name': 'shrinkAxisMask', + 'type': 'number', + 'defaultValue': 0 + } + ] + }, + { + 'tfOpName': 'Pack', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'end': 0, 'name': 'tensors', 'type': 'tensors' }, + ], + 'attrs': [ + { 'tfName': 'axis', 'name': 'axis', 'type': 'number', 'defaultValue': 0 } + ] + }, + { + 'tfOpName': 'Unpack', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'tensor', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'axis', 'name': 'axis', 'type': 'number', 'defaultValue': 0 }, { + 'tfName': 'num', + 'name': 'num', + 'type': 'number', + 'defaultValue': 0, + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'Tile', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'reps', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 'Split', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'axis', 'type': 'number', 'defaultValue': 0 }, + { 'start': 1, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [{ + 'tfName': 'num_split', + 'name': 'numOrSizeSplits', + 'type': 'number', + 'defaultValue': 1 + }] + }, + { + 'tfOpName': 'SplitV', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'numOrSizeSplits', 'type': 'number[]' }, + { 
'start': 2, 'name': 'axis', 'type': 'number', 'defaultValue': 0 } + ] + }, + { + 'tfOpName': 'ScatterNd', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'indices', 'type': 'tensor' }, + { 'start': 1, 'name': 'values', 'type': 'tensor' }, + { 'start': 2, 'name': 'shape', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 'GatherNd', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'indices', 'type': 'tensor' } + ] + }, + { + 'tfOpName': 'SparseToDense', + 'category': 'slice_join', + 'inputs': [ + { 'start': 0, 'name': 'sparseIndices', 'type': 'tensor' }, + { 'start': 1, 'name': 'outputShape', 'type': 'number[]' }, + { 'start': 2, 'name': 'sparseValues', 'type': 'tensor' }, + { 'start': 3, 'name': 'defaultValue', 'type': 'tensor' }, + ], + 'attrs': [{ + 'tfName': 'validate_indices', + 'name': 'validateIndices', + 'type': 'bool', + 'defaultValue': false, + 'notSupported': true + }] + } +]; +//# sourceMappingURL=slice_join.js.map + +/***/ }), +/* 55 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'FFT', + 'category': 'spectral', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'IFFT', + 'category': 'spectral', + 'inputs': [{ 'start': 0, 'name': 'x', 'type': 'tensor' }] + }, + { + 'tfOpName': 'RFFT', + 'category': 'spectral', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, { + 'start': 1, + 'name': 'fft_length', + 'type': 'number', + 'notSupported': true + } + ] + }, + { + 'tfOpName': 'IRFFT', + 'category': 'spectral', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, { + 'start': 1, + 'name': 'fft_length', + 'type': 'number', + 'notSupported': true + } + ] + } +]; +//# sourceMappingURL=spectral.js.map + +/***/ }), +/* 56 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +__webpack_require__.r(__webpack_exports__); +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "json", function() { return json; }); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * ============================================================================= + */ +const json = [ + { + 'tfOpName': 'Cast', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { + 'tfName': 'SrcT', + 'name': 'sdtype', + 'type': 'dtype', + 'notSupported': true + }, + { 'tfName': 'DstT', 'name': 'dtype', 'type': 'dtype' } + ] + }, + { + 'tfOpName': 'ExpandDims', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'axis', 'type': 'number' } + ] + }, + { + 'tfOpName': 'Pad', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'padding', 'type': 'number[]' }, + ], + 'attrs': [{ + 'tfName': 'constant_value', + 'name': 'constantValue', + 'type': 'number', + 'defaultValue': 0 + }] + }, + { + 'tfOpName': 'PadV2', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'padding', 'type': 'number[]' }, { + 'start': 2, + 'name': 'constantValue', + 'type': 'number', + 'defaultValue': 0 + } + ] + }, + { + 'tfOpName': 'Reshape', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'shape', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 'Squeeze', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [{ + 'tfName': 'axis', + 'tfDeprecatedName': 'squeeze_dims', + 'name': 'axis', + 'type': 'number[]' + }] + }, + { + 'tfOpName': 'SpaceToBatchND', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'blockShape', 'type': 'number[]' }, + { 'start': 2, 'name': 'paddings', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 'BatchToSpaceND', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 
'start': 1, 'name': 'blockShape', 'type': 'number[]' }, + { 'start': 2, 'name': 'crops', 'type': 'number[]' } + ] + }, + { + 'tfOpName': 'DepthToSpace', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + ], + 'attrs': [ + { 'tfName': 'block_size', 'name': 'blockSize', 'type': 'number' }, + { 'tfName': 'data_format', 'name': 'dataFormat', 'type': 'string' } + ] + }, + { + 'tfOpName': 'BroadcastTo', + 'category': 'transformation', + 'inputs': [ + { 'start': 0, 'name': 'x', 'type': 'tensor' }, + { 'start': 1, 'name': 'shape', 'type': 'number[]' }, + ], + 'attrs': [] + } +]; +//# sourceMappingURL=transformation.js.map + +/***/ }), +/* 57 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* WEBPACK VAR INJECTION */(function(Buffer) {/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return ByteChunkIterator; }); +/* harmony import */ var _tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(0); +/* harmony import */ var _lazy_iterator__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(14); +/* harmony import */ var _string_iterator__WEBPACK_IMPORTED_MODULE_2__ = __webpack_require__(58); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + * ============================================================================= + */ + + + +class ByteChunkIterator extends _lazy_iterator__WEBPACK_IMPORTED_MODULE_1__[/* LazyIterator */ "a"] { + /** + * Decode a stream of UTF8-encoded byte arrays to a stream of strings. + * + * The byte arrays produced from the ByteChunkIterator on which this is + * called will be interpreted as concatenated. No assumptions are made about + * the boundaries of the incoming chunks, so a multi-byte UTF8 encoding of a + * character may span the boundary between chunks. This naturally happens, + * for instance, when reading fixed-size byte arrays from a file. + */ + decodeUTF8() { + return new Utf8Iterator(this); + } +} +// ============================================================================
// The following private classes serve to implement the chainable methods +// on ByteChunkIterator. Unfortunately they can't be placed in separate files, +// due to resulting trouble with circular imports. +// ============================================================================ +// We wanted multiple inheritance, e.g. +// class Utf8Iterator extends QueueIterator, StringIterator +// but the TypeScript mixin approach is a bit hacky, so we take this adapter +// approach instead. +class Utf8Iterator extends _string_iterator__WEBPACK_IMPORTED_MODULE_2__[/* StringIterator */ "a"] { + constructor(upstream) { + super(); + this.upstream = upstream; + this.impl = new Utf8IteratorImpl(upstream); + } + summary() { + return this.impl.summary(); + } + async next() { + return this.impl.next(); + } +} +/** + * Decode a stream of UTF8-encoded byte arrays to a stream of strings. + * + * This is tricky because the incoming byte array boundaries may disrupt a + * multi-byte UTF8 character. Thus any incomplete character data at the end of + * a chunk must be carried over and prepended to the next chunk before + * decoding. 
Luckily, with native decoders (TextDecoder in the browser and + * string_decoder in Node), byte array boundaries are handled automatically. + * + * In the context of an input pipeline for machine learning, UTF8 decoding is + * needed to parse text files containing training examples or prediction + * requests (e.g., formatted as CSV or JSON). We cannot use the built-in + * decoding provided by FileReader.readAsText() because here we are in a + * streaming context, which FileReader does not support. + * + * @param upstream A `LazyIterator` of `Uint8Arrays` containing UTF8-encoded + * text, which should be interpreted as concatenated. No assumptions are + * made about the boundaries of the incoming chunks, so a multi-byte UTF8 + * encoding of a character may span the boundary between chunks. This + * naturally happens, for instance, when reading fixed-size byte arrays from a + * file. + */ +class Utf8IteratorImpl extends _lazy_iterator__WEBPACK_IMPORTED_MODULE_1__[/* OneToManyIterator */ "b"] { + constructor(upstream) { + super(); + this.upstream = upstream; + if (Object(_tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["env"])().get('IS_BROWSER')) { + this.decoder = new TextDecoder('utf-8'); + } + else { + // tslint:disable-next-line:no-require-imports + const { StringDecoder } = __webpack_require__(78); + this.decoder = new StringDecoder('utf8'); + } + } + summary() { + return `${this.upstream.summary()} -> Utf8`; + } + async pump() { + const chunkResult = await this.upstream.next(); + let chunk; + if (chunkResult.done) { + return false; + } + else { + chunk = chunkResult.value; + } + let text; + if (Object(_tensorflow_tfjs_core__WEBPACK_IMPORTED_MODULE_0__["env"])().get('IS_BROWSER')) { + text = this.decoder.decode(chunk, { stream: true }); + } + else { + text = this.decoder.write(Buffer.from(chunk.buffer)); + } + this.outputQueue.push(text); + return true; + } +} +//# sourceMappingURL=byte_chunk_iterator.js.map +/* WEBPACK VAR INJECTION */}.call(this, 
__webpack_require__(39).Buffer)) + +/***/ }), +/* 58 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "a", function() { return StringIterator; }); +/* harmony import */ var _lazy_iterator__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(14); +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + * ============================================================================= + */ + +class StringIterator extends _lazy_iterator__WEBPACK_IMPORTED_MODULE_0__[/* LazyIterator */ "a"] { + /** + * Splits a string stream on a given separator. + * + * It is assumed that the incoming chunk boundaries have no semantic meaning, + * so conceptually the incoming stream is treated simply as the concatenation + * of its elements. + * + * The outgoing stream provides chunks corresponding to the results of the + * standard string split() operation (even if such a chunk spanned incoming + * chunks). The separators are not included. + * + * A typical usage is to split a text file (represented as a stream with + * arbitrary chunk boundaries) into lines. + * + * @param upstream A readable stream of strings that can be treated as + * concatenated. + * @param separator A character to split on. 
+ */ + split(separator) { + return new SplitIterator(this, separator); + } +} +// ============================================================================ +// The following private classes serve to implement the chainable methods +// on StringIterator. Unfortunately they can't be placed in separate files, due +// to resulting trouble with circular imports. +// ============================================================================ +// We wanted multiple inheritance, e.g. +// class SplitIterator extends QueueIterator, StringIterator +// but the TypeScript mixin approach is a bit hacky, so we take this adapter +// approach instead. +class SplitIterator extends StringIterator { + constructor(upstream, separator) { + super(); + this.upstream = upstream; + this.impl = new SplitIteratorImpl(upstream, separator); + } + summary() { + return this.impl.summary(); + } + async next() { + return this.impl.next(); + } +} +class SplitIteratorImpl extends _lazy_iterator__WEBPACK_IMPORTED_MODULE_0__[/* OneToManyIterator */ "b"] { + constructor(upstream, separator) { + super(); + this.upstream = upstream; + this.separator = separator; + // A partial string at the end of an upstream chunk + this.carryover = ''; + } + summary() { + return `${this.upstream.summary()} -> Split('${this.separator}')`; + } + async pump() { + const chunkResult = await this.upstream.next(); + if (chunkResult.done) { + if (this.carryover === '') { + return false; + } + // Pretend that the pump succeeded in order to emit the small last batch. + // The next pump() call will actually fail. + this.outputQueue.push(this.carryover); + this.carryover = ''; + return true; + } + const lines = chunkResult.value.split(this.separator); + // Note the behavior: " ab ".split(' ') === ['', 'ab', ''] + // Thus the carryover may be '' if the separator falls on a chunk + // boundary; this produces the correct result. 
+ lines[0] = this.carryover + lines[0]; + for (const line of lines.slice(0, -1)) { + this.outputQueue.push(line); + } + this.carryover = lines[lines.length - 1]; + return true; + } +} +//# sourceMappingURL=string_iterator.js.map + +/***/ }), +/* 59 */ +/***/ (function(module, exports, __webpack_require__) { + +"use strict"; + +Object.defineProperty(exports, "__esModule", { value: true }); +const blazeface = __webpack_require__(81); +const tfconv = __webpack_require__(38); +const tf = __webpack_require__(0); +const keypoints_1 = __webpack_require__(82); +const pipeline_1 = __webpack_require__(83); +const uv_coords_1 = __webpack_require__(85); +const FACEMESH_GRAPHMODEL_PATH = 'https://tfhub.dev/mediapipe/tfjs-model/facemesh/1/default/1'; +const MESH_MODEL_INPUT_WIDTH = 192; +const MESH_MODEL_INPUT_HEIGHT = 192; +async function load({ maxContinuousChecks = 5, detectionConfidence = 0.9, maxFaces = 10, iouThreshold = 0.3, scoreThreshold = 0.75 } = {}) { + const [blazeFace, blazeMeshModel] = await Promise.all([ + loadDetectorModel(maxFaces, iouThreshold, scoreThreshold), loadMeshModel() + ]); + const faceMesh = new FaceMesh(blazeFace, blazeMeshModel, maxContinuousChecks, detectionConfidence, maxFaces); + return faceMesh; +} +exports.load = load; +async function loadDetectorModel(maxFaces, iouThreshold, scoreThreshold) { + return blazeface.load({ maxFaces, iouThreshold, scoreThreshold }); +} +async function loadMeshModel() { + return tfconv.loadGraphModel(FACEMESH_GRAPHMODEL_PATH, { fromTFHub: true }); +} +function getInputTensorDimensions(input) { + return input instanceof tf.Tensor ? 
[input.shape[0], input.shape[1]] : + [input.height, input.width]; +} +function flipFaceHorizontal(face, imageWidth) { + if (face.mesh instanceof tf.Tensor) { + const [topLeft, bottomRight, mesh, scaledMesh] = tf.tidy(() => { + const subtractBasis = tf.tensor1d([imageWidth - 1, 0, 0]); + const multiplyBasis = tf.tensor1d([1, -1, 1]); + return tf.tidy(() => { + return [ + tf.concat([ + tf.sub(imageWidth - 1, face.boundingBox.topLeft.slice(0, 1)), + face.boundingBox.topLeft.slice(1, 1) + ]), + tf.concat([ + tf.sub(imageWidth - 1, face.boundingBox.bottomRight.slice(0, 1)), + face.boundingBox.bottomRight.slice(1, 1) + ]), + tf.sub(subtractBasis, face.mesh).mul(multiplyBasis), + tf.sub(subtractBasis, face.scaledMesh).mul(multiplyBasis) + ]; + }); + }); + return Object.assign({}, face, { boundingBox: { topLeft, bottomRight }, mesh, scaledMesh }); + } + return Object.assign({}, face, { + boundingBox: { + topLeft: [ + imageWidth - 1 - face.boundingBox.topLeft[0], + face.boundingBox.topLeft[1] + ], + bottomRight: [ + imageWidth - 1 - face.boundingBox.bottomRight[0], + face.boundingBox.bottomRight[1] + ] + }, + mesh: (face.mesh).map(coord => { + const flippedCoord = coord.slice(0); + flippedCoord[0] = imageWidth - 1 - coord[0]; + return flippedCoord; + }), + scaledMesh: face.scaledMesh.map(coord => { + const flippedCoord = coord.slice(0); + flippedCoord[0] = imageWidth - 1 - coord[0]; + return flippedCoord; + }) + }); +} +class FaceMesh { + constructor(blazeFace, blazeMeshModel, maxContinuousChecks, detectionConfidence, maxFaces) { + this.pipeline = new pipeline_1.Pipeline(blazeFace, blazeMeshModel, MESH_MODEL_INPUT_WIDTH, MESH_MODEL_INPUT_HEIGHT, maxContinuousChecks, maxFaces); + this.detectionConfidence = detectionConfidence; + } + static getAnnotations() { + return keypoints_1.MESH_ANNOTATIONS; + } + static getUVCoords() { + return uv_coords_1.UV_COORDS; + } + async estimateFaces(input, returnTensors = false, flipHorizontal = false) { + const [, width] = 
getInputTensorDimensions(input); + const image = tf.tidy(() => { + if (!(input instanceof tf.Tensor)) { + input = tf.browser.fromPixels(input); + } + return input.toFloat().expandDims(0); + }); + const savedWebglPackDepthwiseConvFlag = tf.env().get('WEBGL_PACK_DEPTHWISECONV'); + tf.env().set('WEBGL_PACK_DEPTHWISECONV', true); + const predictions = await this.pipeline.predict(image); + tf.env().set('WEBGL_PACK_DEPTHWISECONV', savedWebglPackDepthwiseConvFlag); + image.dispose(); + if (predictions != null && predictions.length > 0) { + return Promise.all(predictions.map(async (prediction, i) => { + const { coords, scaledCoords, box, flag } = prediction; + let tensorsToRead = [flag]; + if (!returnTensors) { + tensorsToRead = tensorsToRead.concat([coords, scaledCoords, box.startPoint, box.endPoint]); + } + const tensorValues = await Promise.all(tensorsToRead.map(async (d) => d.array())); + const flagValue = tensorValues[0]; + flag.dispose(); + if (flagValue < this.detectionConfidence) { + this.pipeline.clearRegionOfInterest(i); + } + if (returnTensors) { + const annotatedPrediction = { + faceInViewConfidence: flagValue, + mesh: coords, + scaledMesh: scaledCoords, + boundingBox: { + topLeft: box.startPoint.squeeze(), + bottomRight: box.endPoint.squeeze() + } + }; + if (flipHorizontal) { + return flipFaceHorizontal(annotatedPrediction, width); + } + return annotatedPrediction; + } + const [coordsArr, coordsArrScaled, topLeft, bottomRight] = tensorValues.slice(1); + scaledCoords.dispose(); + coords.dispose(); + let annotatedPrediction = { + faceInViewConfidence: flagValue, + boundingBox: { topLeft, bottomRight }, + mesh: coordsArr, + scaledMesh: coordsArrScaled + }; + if (flipHorizontal) { + annotatedPrediction = + flipFaceHorizontal(annotatedPrediction, width); + } + const annotations = {}; + for (const key in keypoints_1.MESH_ANNOTATIONS) { + annotations[key] = keypoints_1.MESH_ANNOTATIONS[key].map(index => annotatedPrediction.scaledMesh[index]); + } + 
annotatedPrediction['annotations'] = annotations; + return annotatedPrediction; + })); + } + return []; + } +} +exports.FaceMesh = FaceMesh; +//# sourceMappingURL=index.js.map + +/***/ }), +/* 60 */ +/***/ (function(module, exports, __webpack_require__) { + +"use strict"; +/** + * @license + * Copyright 2020 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +Object.defineProperty(exports, '__esModule', { value: true }); + +var tfjsCore = __webpack_require__(0); +var tfjsLayers = __webpack_require__(87); +var tfjsConverter = __webpack_require__(38); +var tfjsData = __webpack_require__(88); +var tfjsBackendCpu = __webpack_require__(89); +var tfjsBackendWebgl = __webpack_require__(86); + +/** @license See the LICENSE file. */ +// This code is auto-generated, do not modify this file! +var version = '2.0.1'; + +/** + * @license + * Copyright 2018 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ +var version$1 = { + 'tfjs-core': tfjsCore.version_core, + 'tfjs-backend-cpu': tfjsBackendCpu.version_cpu, + 'tfjs-backend-webgl': tfjsBackendWebgl.version_webgl, + 'tfjs-data': tfjsData.version_data, + 'tfjs-layers': tfjsLayers.version_layers, + 'tfjs-converter': tfjsConverter.version_converter, + 'tfjs': version +}; + +Object.keys(tfjsCore).forEach(function (k) { + if (k !== 'default') Object.defineProperty(exports, k, { + enumerable: true, + get: function () { + return tfjsCore[k]; + } + }); +}); +Object.keys(tfjsLayers).forEach(function (k) { + if (k !== 'default') Object.defineProperty(exports, k, { + enumerable: true, + get: function () { + return tfjsLayers[k]; + } + }); +}); +Object.keys(tfjsConverter).forEach(function (k) { + if (k !== 'default') Object.defineProperty(exports, k, { + enumerable: true, + get: function () { + return tfjsConverter[k]; + } + }); +}); +exports.data = tfjsData; +exports.version = version$1; +//# sourceMappingURL=tf.node.js.map + + +/***/ }), +/* 61 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* WEBPACK VAR INJECTION */(function(process) {/* harmony import */ var _device_util__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(36); +/* harmony import */ var _environment__WEBPACK_IMPORTED_MODULE_1__ = __webpack_require__(10); +/** + * @license + * Copyright 2019 Google Inc. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + + +const ENV = Object(_environment__WEBPACK_IMPORTED_MODULE_1__[/* env */ "c"])(); +/** + * This file contains environment-related flag registrations. + */ +/** Whether to enable debug mode. */ +ENV.registerFlag('DEBUG', () => false, debugValue => { + if (debugValue) { + console.warn('Debugging mode is ON. The output of every math call will ' + + 'be downloaded to CPU and checked for NaNs. ' + + 'This significantly impacts performance.'); + } +}); +/** Whether we are in a browser (as versus, say, node.js) environment. */ +ENV.registerFlag('IS_BROWSER', () => _device_util__WEBPACK_IMPORTED_MODULE_0__["isBrowser"]()); +/** Whether we are in a browser (as versus, say, node.js) environment. */ +ENV.registerFlag('IS_NODE', () => (typeof process !== 'undefined') && + (typeof process.versions !== 'undefined') && + (typeof process.versions.node !== 'undefined')); +/** Whether this browser is Chrome. */ +ENV.registerFlag('IS_CHROME', () => typeof navigator !== 'undefined' && navigator != null && + navigator.userAgent != null && /Chrome/.test(navigator.userAgent) && + /Google Inc/.test(navigator.vendor)); +/** + * True when the environment is "production" where we disable safety checks + * to gain performance. + */ +ENV.registerFlag('PROD', () => false); +/** + * Whether to do sanity checks when inferring a shape from user-provided + * values, used when creating a new tensor. 
+ */ +ENV.registerFlag('TENSORLIKE_CHECK_SHAPE_CONSISTENCY', () => ENV.getBool('DEBUG')); +/** Whether deprecation warnings are enabled. */ +ENV.registerFlag('DEPRECATION_WARNINGS_ENABLED', () => true); +/** True if running unit tests. */ +ENV.registerFlag('IS_TEST', () => false); +//# sourceMappingURL=flags.js.map +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(35))) + +/***/ }), +/* 62 */ +/***/ (function(module, __webpack_exports__, __webpack_require__) { + +"use strict"; +/* WEBPACK VAR INJECTION */(function(process) {/* unused harmony export getNodeFetch */ +/* unused harmony export resetSystemFetch */ +/* unused harmony export setSystemFetch */ +/* unused harmony export getSystemFetch */ +/* unused harmony export PlatformNode */ +/* harmony import */ var _environment__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(10); +/** + * @license + * Copyright 2019 Google LLC. All Rights Reserved. + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * ============================================================================= + */ + +// We are wrapping this within an object so it can be stubbed by Jasmine. +const getNodeFetch = { + // tslint:disable-next-line:no-require-imports + importFetch: () => __webpack_require__(63) +}; +let systemFetch; +// These getters and setters are for testing so we don't export a mutable +// variable. 
+function resetSystemFetch() { + systemFetch = null; +} +function setSystemFetch(fetchFn) { + systemFetch = fetchFn; +} +function getSystemFetch() { + return systemFetch; +} +class PlatformNode { + constructor() { + // tslint:disable-next-line:no-require-imports + this.util = __webpack_require__(64); + // According to the spec, the built-in encoder can do only UTF-8 encoding. + // https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder/TextEncoder + this.textEncoder = new this.util.TextEncoder(); + } + fetch(path, requestInits) { + if (Object(_environment__WEBPACK_IMPORTED_MODULE_0__[/* env */ "c"])().global.fetch != null) { + return Object(_environment__WEBPACK_IMPORTED_MODULE_0__[/* env */ "c"])().global.fetch(path, requestInits); + } + if (systemFetch == null) { + systemFetch = getNodeFetch.importFetch(); + } + return systemFetch(path, requestInits); + } + now() { + const time = process.hrtime(); + return time[0] * 1000 + time[1] / 1000000; + } + encode(text, encoding) { + if (encoding !== 'utf-8' && encoding !== 'utf8') { + throw new Error(`Node built-in encoder only supports utf-8, but got ${encoding}`); + } + return this.textEncoder.encode(text); + } + decode(bytes, encoding) { + if (bytes.length === 0) { + return ''; + } + return new this.util.TextDecoder(encoding).decode(bytes); + } +} +if (Object(_environment__WEBPACK_IMPORTED_MODULE_0__[/* env */ "c"])().get('IS_NODE')) { + Object(_environment__WEBPACK_IMPORTED_MODULE_0__[/* env */ "c"])().setPlatform('node', new PlatformNode()); +} +//# sourceMappingURL=platform_node.js.map +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(35))) + +/***/ }), +/* 63 */ +/***/ (function(module, exports) { + +/* (ignored) */ + +/***/ }), +/* 64 */ +/***/ (function(module, exports) { + +/* (ignored) */ + +/***/ }), +/* 65 */ +/***/ (function(module, exports, __webpack_require__) { + +"use strict"; + + +exports.byteLength = byteLength +exports.toByteArray = toByteArray +exports.fromByteArray = fromByteArray + 
+var lookup = [] +var revLookup = [] +var Arr = typeof Uint8Array !== 'undefined' ? Uint8Array : Array + +var code = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/' +for (var i = 0, len = code.length; i < len; ++i) { + lookup[i] = code[i] + revLookup[code.charCodeAt(i)] = i +} + +// Support decoding URL-safe base64 strings, as Node.js does. +// See: https://en.wikipedia.org/wiki/Base64#URL_applications +revLookup['-'.charCodeAt(0)] = 62 +revLookup['_'.charCodeAt(0)] = 63 + +function getLens (b64) { + var len = b64.length + + if (len % 4 > 0) { + throw new Error('Invalid string. Length must be a multiple of 4') + } + + // Trim off extra bytes after placeholder bytes are found + // See: https://github.com/beatgammit/base64-js/issues/42 + var validLen = b64.indexOf('=') + if (validLen === -1) validLen = len + + var placeHoldersLen = validLen === len + ? 0 + : 4 - (validLen % 4) + + return [validLen, placeHoldersLen] +} + +// base64 is 4/3 + up to two characters of the original data +function byteLength (b64) { + var lens = getLens(b64) + var validLen = lens[0] + var placeHoldersLen = lens[1] + return ((validLen + placeHoldersLen) * 3 / 4) - placeHoldersLen +} + +function _byteLength (b64, validLen, placeHoldersLen) { + return ((validLen + placeHoldersLen) * 3 / 4) - placeHoldersLen +} + +function toByteArray (b64) { + var tmp + var lens = getLens(b64) + var validLen = lens[0] + var placeHoldersLen = lens[1] + + var arr = new Arr(_byteLength(b64, validLen, placeHoldersLen)) + + var curByte = 0 + + // if there are placeholders, only get up to the last complete 4 chars + var len = placeHoldersLen > 0 + ? 
validLen - 4 + : validLen + + var i + for (i = 0; i < len; i += 4) { + tmp = + (revLookup[b64.charCodeAt(i)] << 18) | + (revLookup[b64.charCodeAt(i + 1)] << 12) | + (revLookup[b64.charCodeAt(i + 2)] << 6) | + revLookup[b64.charCodeAt(i + 3)] + arr[curByte++] = (tmp >> 16) & 0xFF + arr[curByte++] = (tmp >> 8) & 0xFF + arr[curByte++] = tmp & 0xFF + } + + if (placeHoldersLen === 2) { + tmp = + (revLookup[b64.charCodeAt(i)] << 2) | + (revLookup[b64.charCodeAt(i + 1)] >> 4) + arr[curByte++] = tmp & 0xFF + } + + if (placeHoldersLen === 1) { + tmp = + (revLookup[b64.charCodeAt(i)] << 10) | + (revLookup[b64.charCodeAt(i + 1)] << 4) | + (revLookup[b64.charCodeAt(i + 2)] >> 2) + arr[curByte++] = (tmp >> 8) & 0xFF + arr[curByte++] = tmp & 0xFF + } + + return arr +} + +function tripletToBase64 (num) { + return lookup[num >> 18 & 0x3F] + + lookup[num >> 12 & 0x3F] + + lookup[num >> 6 & 0x3F] + + lookup[num & 0x3F] +} + +function encodeChunk (uint8, start, end) { + var tmp + var output = [] + for (var i = start; i < end; i += 3) { + tmp = + ((uint8[i] << 16) & 0xFF0000) + + ((uint8[i + 1] << 8) & 0xFF00) + + (uint8[i + 2] & 0xFF) + output.push(tripletToBase64(tmp)) + } + return output.join('') +} + +function fromByteArray (uint8) { + var tmp + var len = uint8.length + var extraBytes = len % 3 // if we have 1 byte left, pad 2 bytes + var parts = [] + var maxChunkLength = 16383 // must be multiple of 3 + + // go through the array every three bytes, we'll deal with trailing stuff later + for (var i = 0, len2 = len - extraBytes; i < len2; i += maxChunkLength) { + parts.push(encodeChunk( + uint8, i, (i + maxChunkLength) > len2 ? 
len2 : (i + maxChunkLength) + )) + } + + // pad the end with zeros, but make sure to not forget the extra bytes + if (extraBytes === 1) { + tmp = uint8[len - 1] + parts.push( + lookup[tmp >> 2] + + lookup[(tmp << 4) & 0x3F] + + '==' + ) + } else if (extraBytes === 2) { + tmp = (uint8[len - 2] << 8) + uint8[len - 1] + parts.push( + lookup[tmp >> 10] + + lookup[(tmp >> 4) & 0x3F] + + lookup[(tmp << 2) & 0x3F] + + '=' + ) + } + + return parts.join('') +} + + +/***/ }), +/* 66 */ +/***/ (function(module, exports) { + +exports.read = function (buffer, offset, isLE, mLen, nBytes) { + var e, m + var eLen = (nBytes * 8) - mLen - 1 + var eMax = (1 << eLen) - 1 + var eBias = eMax >> 1 + var nBits = -7 + var i = isLE ? (nBytes - 1) : 0 + var d = isLE ? -1 : 1 + var s = buffer[offset + i] + + i += d + + e = s & ((1 << (-nBits)) - 1) + s >>= (-nBits) + nBits += eLen + for (; nBits > 0; e = (e * 256) + buffer[offset + i], i += d, nBits -= 8) {} + + m = e & ((1 << (-nBits)) - 1) + e >>= (-nBits) + nBits += mLen + for (; nBits > 0; m = (m * 256) + buffer[offset + i], i += d, nBits -= 8) {} + + if (e === 0) { + e = 1 - eBias + } else if (e === eMax) { + return m ? NaN : ((s ? -1 : 1) * Infinity) + } else { + m = m + Math.pow(2, mLen) + e = e - eBias + } + return (s ? -1 : 1) * m * Math.pow(2, e - mLen) +} + +exports.write = function (buffer, value, offset, isLE, mLen, nBytes) { + var e, m, c + var eLen = (nBytes * 8) - mLen - 1 + var eMax = (1 << eLen) - 1 + var eBias = eMax >> 1 + var rt = (mLen === 23 ? Math.pow(2, -24) - Math.pow(2, -77) : 0) + var i = isLE ? 0 : (nBytes - 1) + var d = isLE ? 1 : -1 + var s = value < 0 || (value === 0 && 1 / value < 0) ? 1 : 0 + + value = Math.abs(value) + + if (isNaN(value) || value === Infinity) { + m = isNaN(value) ? 
1 : 0 + e = eMax + } else { + e = Math.floor(Math.log(value) / Math.LN2) + if (value * (c = Math.pow(2, -e)) < 1) { + e-- + c *= 2 + } + if (e + eBias >= 1) { + value += rt / c + } else { + value += rt * Math.pow(2, 1 - eBias) + } + if (value * c >= 2) { + e++ + c /= 2 + } + + if (e + eBias >= eMax) { + m = 0 + e = eMax + } else if (e + eBias >= 1) { + m = ((value * c) - 1) * Math.pow(2, mLen) + e = e + eBias + } else { + m = value * Math.pow(2, eBias - 1) * Math.pow(2, mLen) + e = 0 + } + } + + for (; mLen >= 8; buffer[offset + i] = m & 0xff, i += d, m /= 256, mLen -= 8) {} + + e = (e << mLen) | m + eLen += mLen + for (; eLen > 0; buffer[offset + i] = e & 0xff, i += d, e /= 256, eLen -= 8) {} + + buffer[offset + i - d] |= s * 128 +} + + +/***/ }), +/* 67 */ +/***/ (function(module, exports) { + +var toString = {}.toString; + +module.exports = Array.isArray || function (arr) { + return toString.call(arr) == '[object Array]'; +}; + + +/***/ }), +/* 68 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(module) {var __WEBPACK_AMD_DEFINE_RESULT__;// A port of an algorithm by Johannes Baagøe , 2010 +// http://baagoe.com/en/RandomMusings/javascript/ +// https://github.com/nquinlan/better-random-numbers-for-javascript-mirror +// Original work is under MIT license - + +// Copyright (C) 2010 by Johannes Baagøe +// +// Permission is hereby granted, free of charge, to any person obtaining a copy +// of this software and associated documentation files (the "Software"), to deal +// in the Software without restriction, including without limitation the rights +// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +// copies of the Software, and to permit persons to whom the Software is +// furnished to do so, subject to the following conditions: +// +// The above copyright notice and this permission notice shall be included in +// all copies or substantial portions of the Software. 
+// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +// THE SOFTWARE. + + + +(function(global, module, define) { + +function Alea(seed) { + var me = this, mash = Mash(); + + me.next = function() { + var t = 2091639 * me.s0 + me.c * 2.3283064365386963e-10; // 2^-32 + me.s0 = me.s1; + me.s1 = me.s2; + return me.s2 = t - (me.c = t | 0); + }; + + // Apply the seeding algorithm from Baagoe. + me.c = 1; + me.s0 = mash(' '); + me.s1 = mash(' '); + me.s2 = mash(' '); + me.s0 -= mash(seed); + if (me.s0 < 0) { me.s0 += 1; } + me.s1 -= mash(seed); + if (me.s1 < 0) { me.s1 += 1; } + me.s2 -= mash(seed); + if (me.s2 < 0) { me.s2 += 1; } + mash = null; +} + +function copy(f, t) { + t.c = f.c; + t.s0 = f.s0; + t.s1 = f.s1; + t.s2 = f.s2; + return t; +} + +function impl(seed, opts) { + var xg = new Alea(seed), + state = opts && opts.state, + prng = xg.next; + prng.int32 = function() { return (xg.next() * 0x100000000) | 0; } + prng.double = function() { + return prng() + (prng() * 0x200000 | 0) * 1.1102230246251565e-16; // 2^-53 + }; + prng.quick = prng; + if (state) { + if (typeof(state) == 'object') copy(state, xg); + prng.state = function() { return copy(xg, {}); } + } + return prng; +} + +function Mash() { + var n = 0xefc8249d; + + var mash = function(data) { + data = data.toString(); + for (var i = 0; i < data.length; i++) { + n += data.charCodeAt(i); + var h = 0.02519603282416938 * n; + n = h >>> 0; + h -= n; + h *= n; + n = h >>> 0; + h -= n; + n += h * 0x100000000; // 2^32 + } + return (n >>> 0) * 2.3283064365386963e-10; // 2^-32 + }; + + return mash; +} + + 
+if (module && module.exports) { + module.exports = impl; +} else if (__webpack_require__(16) && __webpack_require__(29)) { + !(__WEBPACK_AMD_DEFINE_RESULT__ = (function() { return impl; }).call(exports, __webpack_require__, exports, module), + __WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__)); +} else { + this.alea = impl; +} + +})( + this, + true && module, // present in node.js + __webpack_require__(16) // present with an AMD loader +); + + + +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(28)(module))) + +/***/ }), +/* 69 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(module) {var __WEBPACK_AMD_DEFINE_RESULT__;// A Javascript implementaion of the "xor128" prng algorithm by +// George Marsaglia. See http://www.jstatsoft.org/v08/i14/paper + +(function(global, module, define) { + +function XorGen(seed) { + var me = this, strseed = ''; + + me.x = 0; + me.y = 0; + me.z = 0; + me.w = 0; + + // Set up generator function. + me.next = function() { + var t = me.x ^ (me.x << 11); + me.x = me.y; + me.y = me.z; + me.z = me.w; + return me.w ^= (me.w >>> 19) ^ t ^ (t >>> 8); + }; + + if (seed === (seed | 0)) { + // Integer seed. + me.x = seed; + } else { + // String seed. + strseed += seed; + } + + // Mix in string seed, then discard an initial batch of 64 values. 
+ for (var k = 0; k < strseed.length + 64; k++) { + me.x ^= strseed.charCodeAt(k) | 0; + me.next(); + } +} + +function copy(f, t) { + t.x = f.x; + t.y = f.y; + t.z = f.z; + t.w = f.w; + return t; +} + +function impl(seed, opts) { + var xg = new XorGen(seed), + state = opts && opts.state, + prng = function() { return (xg.next() >>> 0) / 0x100000000; }; + prng.double = function() { + do { + var top = xg.next() >>> 11, + bot = (xg.next() >>> 0) / 0x100000000, + result = (top + bot) / (1 << 21); + } while (result === 0); + return result; + }; + prng.int32 = xg.next; + prng.quick = prng; + if (state) { + if (typeof(state) == 'object') copy(state, xg); + prng.state = function() { return copy(xg, {}); } + } + return prng; +} + +if (module && module.exports) { + module.exports = impl; +} else if (__webpack_require__(16) && __webpack_require__(29)) { + !(__WEBPACK_AMD_DEFINE_RESULT__ = (function() { return impl; }).call(exports, __webpack_require__, exports, module), + __WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__)); +} else { + this.xor128 = impl; +} + +})( + this, + true && module, // present in node.js + __webpack_require__(16) // present with an AMD loader +); + + + +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(28)(module))) + +/***/ }), +/* 70 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(module) {var __WEBPACK_AMD_DEFINE_RESULT__;// A Javascript implementaion of the "xorwow" prng algorithm by +// George Marsaglia. See http://www.jstatsoft.org/v08/i14/paper + +(function(global, module, define) { + +function XorGen(seed) { + var me = this, strseed = ''; + + // Set up generator function. 
+ me.next = function() { + var t = (me.x ^ (me.x >>> 2)); + me.x = me.y; me.y = me.z; me.z = me.w; me.w = me.v; + return (me.d = (me.d + 362437 | 0)) + + (me.v = (me.v ^ (me.v << 4)) ^ (t ^ (t << 1))) | 0; + }; + + me.x = 0; + me.y = 0; + me.z = 0; + me.w = 0; + me.v = 0; + + if (seed === (seed | 0)) { + // Integer seed. + me.x = seed; + } else { + // String seed. + strseed += seed; + } + + // Mix in string seed, then discard an initial batch of 64 values. + for (var k = 0; k < strseed.length + 64; k++) { + me.x ^= strseed.charCodeAt(k) | 0; + if (k == strseed.length) { + me.d = me.x << 10 ^ me.x >>> 4; + } + me.next(); + } +} + +function copy(f, t) { + t.x = f.x; + t.y = f.y; + t.z = f.z; + t.w = f.w; + t.v = f.v; + t.d = f.d; + return t; +} + +function impl(seed, opts) { + var xg = new XorGen(seed), + state = opts && opts.state, + prng = function() { return (xg.next() >>> 0) / 0x100000000; }; + prng.double = function() { + do { + var top = xg.next() >>> 11, + bot = (xg.next() >>> 0) / 0x100000000, + result = (top + bot) / (1 << 21); + } while (result === 0); + return result; + }; + prng.int32 = xg.next; + prng.quick = prng; + if (state) { + if (typeof(state) == 'object') copy(state, xg); + prng.state = function() { return copy(xg, {}); } + } + return prng; +} + +if (module && module.exports) { + module.exports = impl; +} else if (__webpack_require__(16) && __webpack_require__(29)) { + !(__WEBPACK_AMD_DEFINE_RESULT__ = (function() { return impl; }).call(exports, __webpack_require__, exports, module), + __WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__)); +} else { + this.xorwow = impl; +} + +})( + this, + true && module, // present in node.js + __webpack_require__(16) // present with an AMD loader +); + + + +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(28)(module))) + +/***/ }), +/* 71 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(module) {var 
__WEBPACK_AMD_DEFINE_RESULT__;// A Javascript implementaion of the "xorshift7" algorithm by +// François Panneton and Pierre L'ecuyer: +// "On the Xorgshift Random Number Generators" +// http://saluc.engr.uconn.edu/refs/crypto/rng/panneton05onthexorshift.pdf + +(function(global, module, define) { + +function XorGen(seed) { + var me = this; + + // Set up generator function. + me.next = function() { + // Update xor generator. + var X = me.x, i = me.i, t, v, w; + t = X[i]; t ^= (t >>> 7); v = t ^ (t << 24); + t = X[(i + 1) & 7]; v ^= t ^ (t >>> 10); + t = X[(i + 3) & 7]; v ^= t ^ (t >>> 3); + t = X[(i + 4) & 7]; v ^= t ^ (t << 7); + t = X[(i + 7) & 7]; t = t ^ (t << 13); v ^= t ^ (t << 9); + X[i] = v; + me.i = (i + 1) & 7; + return v; + }; + + function init(me, seed) { + var j, w, X = []; + + if (seed === (seed | 0)) { + // Seed state array using a 32-bit integer. + w = X[0] = seed; + } else { + // Seed state using a string. + seed = '' + seed; + for (j = 0; j < seed.length; ++j) { + X[j & 7] = (X[j & 7] << 15) ^ + (seed.charCodeAt(j) + X[(j + 1) & 7] << 13); + } + } + // Enforce an array length of 8, not all zeroes. + while (X.length < 8) X.push(0); + for (j = 0; j < 8 && X[j] === 0; ++j); + if (j == 8) w = X[7] = -1; else w = X[j]; + + me.x = X; + me.i = 0; + + // Discard an initial 256 values. 
+ for (j = 256; j > 0; --j) { + me.next(); + } + } + + init(me, seed); +} + +function copy(f, t) { + t.x = f.x.slice(); + t.i = f.i; + return t; +} + +function impl(seed, opts) { + if (seed == null) seed = +(new Date); + var xg = new XorGen(seed), + state = opts && opts.state, + prng = function() { return (xg.next() >>> 0) / 0x100000000; }; + prng.double = function() { + do { + var top = xg.next() >>> 11, + bot = (xg.next() >>> 0) / 0x100000000, + result = (top + bot) / (1 << 21); + } while (result === 0); + return result; + }; + prng.int32 = xg.next; + prng.quick = prng; + if (state) { + if (state.x) copy(state, xg); + prng.state = function() { return copy(xg, {}); } + } + return prng; +} + +if (module && module.exports) { + module.exports = impl; +} else if (__webpack_require__(16) && __webpack_require__(29)) { + !(__WEBPACK_AMD_DEFINE_RESULT__ = (function() { return impl; }).call(exports, __webpack_require__, exports, module), + __WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__)); +} else { + this.xorshift7 = impl; +} + +})( + this, + true && module, // present in node.js + __webpack_require__(16) // present with an AMD loader +); + + +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(28)(module))) + +/***/ }), +/* 72 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(module) {var __WEBPACK_AMD_DEFINE_RESULT__;// A Javascript implementaion of Richard Brent's Xorgens xor4096 algorithm. +// +// This fast non-cryptographic random number generator is designed for +// use in Monte-Carlo algorithms. It combines a long-period xorshift +// generator with a Weyl generator, and it passes all common batteries +// of stasticial tests for randomness while consuming only a few nanoseconds +// for each prng generated. For background on the generator, see Brent's +// paper: "Some long-period random number generators using shifts and xors." 
+// http://arxiv.org/pdf/1004.3115v1.pdf +// +// Usage: +// +// var xor4096 = require('xor4096'); +// random = xor4096(1); // Seed with int32 or string. +// assert.equal(random(), 0.1520436450538547); // (0, 1) range, 53 bits. +// assert.equal(random.int32(), 1806534897); // signed int32, 32 bits. +// +// For nonzero numeric keys, this impelementation provides a sequence +// identical to that by Brent's xorgens 3 implementaion in C. This +// implementation also provides for initalizing the generator with +// string seeds, or for saving and restoring the state of the generator. +// +// On Chrome, this prng benchmarks about 2.1 times slower than +// Javascript's built-in Math.random(). + +(function(global, module, define) { + +function XorGen(seed) { + var me = this; + + // Set up generator function. + me.next = function() { + var w = me.w, + X = me.X, i = me.i, t, v; + // Update Weyl generator. + me.w = w = (w + 0x61c88647) | 0; + // Update xor generator. + v = X[(i + 34) & 127]; + t = X[i = ((i + 1) & 127)]; + v ^= v << 13; + t ^= t << 17; + v ^= v >>> 15; + t ^= t >>> 12; + // Update Xor generator array state. + v = X[i] = v ^ t; + me.i = i; + // Result is the combination. + return (v + (w ^ (w >>> 16))) | 0; + }; + + function init(me, seed) { + var t, v, i, j, w, X = [], limit = 128; + if (seed === (seed | 0)) { + // Numeric seeds initialize v, which is used to generates X. + v = seed; + seed = null; + } else { + // String seeds are mixed into v and X one character at a time. + seed = seed + '\0'; + v = 0; + limit = Math.max(limit, seed.length); + } + // Initialize circular array and weyl value. + for (i = 0, j = -32; j < limit; ++j) { + // Put the unicode characters into the array, and shuffle them. + if (seed) v ^= seed.charCodeAt((j + 32) % seed.length); + // After 32 shuffles, take v as the starting w value. + if (j === 0) w = v; + v ^= v << 10; + v ^= v >>> 15; + v ^= v << 4; + v ^= v >>> 13; + if (j >= 0) { + w = (w + 0x61c88647) | 0; // Weyl. 
+ t = (X[j & 127] ^= (v + w)); // Combine xor and weyl to init array. + i = (0 == t) ? i + 1 : 0; // Count zeroes. + } + } + // We have detected all zeroes; make the key nonzero. + if (i >= 128) { + X[(seed && seed.length || 0) & 127] = -1; + } + // Run the generator 512 times to further mix the state before using it. + // Factoring this as a function slows the main generator, so it is just + // unrolled here. The weyl generator is not advanced while warming up. + i = 127; + for (j = 4 * 128; j > 0; --j) { + v = X[(i + 34) & 127]; + t = X[i = ((i + 1) & 127)]; + v ^= v << 13; + t ^= t << 17; + v ^= v >>> 15; + t ^= t >>> 12; + X[i] = v ^ t; + } + // Storing state as object members is faster than using closure variables. + me.w = w; + me.X = X; + me.i = i; + } + + init(me, seed); +} + +function copy(f, t) { + t.i = f.i; + t.w = f.w; + t.X = f.X.slice(); + return t; +}; + +function impl(seed, opts) { + if (seed == null) seed = +(new Date); + var xg = new XorGen(seed), + state = opts && opts.state, + prng = function() { return (xg.next() >>> 0) / 0x100000000; }; + prng.double = function() { + do { + var top = xg.next() >>> 11, + bot = (xg.next() >>> 0) / 0x100000000, + result = (top + bot) / (1 << 21); + } while (result === 0); + return result; + }; + prng.int32 = xg.next; + prng.quick = prng; + if (state) { + if (state.X) copy(state, xg); + prng.state = function() { return copy(xg, {}); } + } + return prng; +} + +if (module && module.exports) { + module.exports = impl; +} else if (__webpack_require__(16) && __webpack_require__(29)) { + !(__WEBPACK_AMD_DEFINE_RESULT__ = (function() { return impl; }).call(exports, __webpack_require__, exports, module), + __WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__)); +} else { + this.xor4096 = impl; +} + +})( + this, // window object or global + true && module, // present in node.js + __webpack_require__(16) // present with an AMD loader +); + +/* WEBPACK VAR INJECTION */}.call(this, 
__webpack_require__(28)(module))) + +/***/ }), +/* 73 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(module) {var __WEBPACK_AMD_DEFINE_RESULT__;// A Javascript implementaion of the "Tyche-i" prng algorithm by +// Samuel Neves and Filipe Araujo. +// See https://eden.dei.uc.pt/~sneves/pubs/2011-snfa2.pdf + +(function(global, module, define) { + +function XorGen(seed) { + var me = this, strseed = ''; + + // Set up generator function. + me.next = function() { + var b = me.b, c = me.c, d = me.d, a = me.a; + b = (b << 25) ^ (b >>> 7) ^ c; + c = (c - d) | 0; + d = (d << 24) ^ (d >>> 8) ^ a; + a = (a - b) | 0; + me.b = b = (b << 20) ^ (b >>> 12) ^ c; + me.c = c = (c - d) | 0; + me.d = (d << 16) ^ (c >>> 16) ^ a; + return me.a = (a - b) | 0; + }; + + /* The following is non-inverted tyche, which has better internal + * bit diffusion, but which is about 25% slower than tyche-i in JS. + me.next = function() { + var a = me.a, b = me.b, c = me.c, d = me.d; + a = (me.a + me.b | 0) >>> 0; + d = me.d ^ a; d = d << 16 ^ d >>> 16; + c = me.c + d | 0; + b = me.b ^ c; b = b << 12 ^ d >>> 20; + me.a = a = a + b | 0; + d = d ^ a; me.d = d = d << 8 ^ d >>> 24; + me.c = c = c + d | 0; + b = b ^ c; + return me.b = (b << 7 ^ b >>> 25); + } + */ + + me.a = 0; + me.b = 0; + me.c = 2654435769 | 0; + me.d = 1367130551; + + if (seed === Math.floor(seed)) { + // Integer seed. + me.a = (seed / 0x100000000) | 0; + me.b = seed | 0; + } else { + // String seed. + strseed += seed; + } + + // Mix in string seed, then discard an initial batch of 64 values. 
+ for (var k = 0; k < strseed.length + 20; k++) { + me.b ^= strseed.charCodeAt(k) | 0; + me.next(); + } +} + +function copy(f, t) { + t.a = f.a; + t.b = f.b; + t.c = f.c; + t.d = f.d; + return t; +}; + +function impl(seed, opts) { + var xg = new XorGen(seed), + state = opts && opts.state, + prng = function() { return (xg.next() >>> 0) / 0x100000000; }; + prng.double = function() { + do { + var top = xg.next() >>> 11, + bot = (xg.next() >>> 0) / 0x100000000, + result = (top + bot) / (1 << 21); + } while (result === 0); + return result; + }; + prng.int32 = xg.next; + prng.quick = prng; + if (state) { + if (typeof(state) == 'object') copy(state, xg); + prng.state = function() { return copy(xg, {}); } + } + return prng; +} + +if (module && module.exports) { + module.exports = impl; +} else if (__webpack_require__(16) && __webpack_require__(29)) { + !(__WEBPACK_AMD_DEFINE_RESULT__ = (function() { return impl; }).call(exports, __webpack_require__, exports, module), + __WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__)); +} else { + this.tychei = impl; +} + +})( + this, + true && module, // present in node.js + __webpack_require__(16) // present with an AMD loader +); + + + +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(28)(module))) + +/***/ }), +/* 74 */ +/***/ (function(module, exports, __webpack_require__) { + +var __WEBPACK_AMD_DEFINE_RESULT__;/* +Copyright 2014 David Bau. 
+ +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the +"Software"), to deal in the Software without restriction, including +without limitation the rights to use, copy, modify, merge, publish, +distribute, sublicense, and/or sell copies of the Software, and to +permit persons to whom the Software is furnished to do so, subject to +the following conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. +IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, +TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +*/ + +(function (pool, math) { +// +// The following constants are related to IEEE 754 limits. +// +var global = this, + width = 256, // each RC4 output is 0 <= x < 256 + chunks = 6, // at least six RC4 outputs for each double + digits = 52, // there are 52 significant digits in a double + rngname = 'random', // rngname: name for Math.random and Math.seedrandom + startdenom = math.pow(width, chunks), + significance = math.pow(2, digits), + overflow = significance * 2, + mask = width - 1, + nodecrypto; // node.js crypto module, initialized at the bottom. + +// +// seedrandom() +// This is the seedrandom function described above. +// +function seedrandom(seed, options, callback) { + var key = []; + options = (options == true) ? { entropy: true } : (options || {}); + + // Flatten the seed string or build one from local entropy if needed. + var shortseed = mixkey(flatten( + options.entropy ? [seed, tostring(pool)] : + (seed == null) ? 
autoseed() : seed, 3), key); + + // Use the seed to initialize an ARC4 generator. + var arc4 = new ARC4(key); + + // This function returns a random double in [0, 1) that contains + // randomness in every bit of the mantissa of the IEEE 754 value. + var prng = function() { + var n = arc4.g(chunks), // Start with a numerator n < 2 ^ 48 + d = startdenom, // and denominator d = 2 ^ 48. + x = 0; // and no 'extra last byte'. + while (n < significance) { // Fill up all significant digits by + n = (n + x) * width; // shifting numerator and + d *= width; // denominator and generating a + x = arc4.g(1); // new least-significant-byte. + } + while (n >= overflow) { // To avoid rounding up, before adding + n /= 2; // last byte, shift everything + d /= 2; // right using integer math until + x >>>= 1; // we have exactly the desired bits. + } + return (n + x) / d; // Form the number within [0, 1). + }; + + prng.int32 = function() { return arc4.g(4) | 0; } + prng.quick = function() { return arc4.g(4) / 0x100000000; } + prng.double = prng; + + // Mix the randomness into accumulated entropy. + mixkey(tostring(arc4.S), pool); + + // Calling convention: what to return as a function of prng, seed, is_math. + return (options.pass || callback || + function(prng, seed, is_math_call, state) { + if (state) { + // Load the arc4 state from the given state if it has an S array. + if (state.S) { copy(state, arc4); } + // Only provide the .state method if requested via options.state. + prng.state = function() { return copy(arc4, {}); } + } + + // If called as a method of Math (Math.seedrandom()), mutate + // Math.random because that is how seedrandom.js has worked since v1.0. + if (is_math_call) { math[rngname] = prng; return seed; } + + // Otherwise, it is a newer calling convention, so return the + // prng directly. + else return prng; + })( + prng, + shortseed, + 'global' in options ? 
options.global : (this == math), + options.state); +} +math['seed' + rngname] = seedrandom; + +// +// ARC4 +// +// An ARC4 implementation. The constructor takes a key in the form of +// an array of at most (width) integers that should be 0 <= x < (width). +// +// The g(count) method returns a pseudorandom integer that concatenates +// the next (count) outputs from ARC4. Its return value is a number x +// that is in the range 0 <= x < (width ^ count). +// +function ARC4(key) { + var t, keylen = key.length, + me = this, i = 0, j = me.i = me.j = 0, s = me.S = []; + + // The empty key [] is treated as [0]. + if (!keylen) { key = [keylen++]; } + + // Set up S using the standard key scheduling algorithm. + while (i < width) { + s[i] = i++; + } + for (i = 0; i < width; i++) { + s[i] = s[j = mask & (j + key[i % keylen] + (t = s[i]))]; + s[j] = t; + } + + // The "g" method returns the next (count) outputs as one number. + (me.g = function(count) { + // Using instance members instead of closure state nearly doubles speed. + var t, r = 0, + i = me.i, j = me.j, s = me.S; + while (count--) { + t = s[i = mask & (i + 1)]; + r = r * width + s[mask & ((s[i] = s[j = mask & (j + t)]) + (s[j] = t))]; + } + me.i = i; me.j = j; + return r; + // For robust unpredictability, the function call below automatically + // discards an initial batch of values. This is called RC4-drop[256]. + // See http://google.com/search?q=rsa+fluhrer+response&btnI + })(width); +} + +// +// copy() +// Copies internal state of ARC4 to or from a plain object. +// +function copy(f, t) { + t.i = f.i; + t.j = f.j; + t.S = f.S.slice(); + return t; +}; + +// +// flatten() +// Converts an object tree to nested arrays of strings. +// +function flatten(obj, depth) { + var result = [], typ = (typeof obj), prop; + if (depth && typ == 'object') { + for (prop in obj) { + try { result.push(flatten(obj[prop], depth - 1)); } catch (e) {} + } + } + return (result.length ? result : typ == 'string' ? 
obj : obj + '\0'); +} + +// +// mixkey() +// Mixes a string seed into a key that is an array of integers, and +// returns a shortened string seed that is equivalent to the result key. +// +function mixkey(seed, key) { + var stringseed = seed + '', smear, j = 0; + while (j < stringseed.length) { + key[mask & j] = + mask & ((smear ^= key[mask & j] * 19) + stringseed.charCodeAt(j++)); + } + return tostring(key); +} + +// +// autoseed() +// Returns an object for autoseeding, using window.crypto and Node crypto +// module if available. +// +function autoseed() { + try { + var out; + if (nodecrypto && (out = nodecrypto.randomBytes)) { + // The use of 'out' to remember randomBytes makes tight minified code. + out = out(width); + } else { + out = new Uint8Array(width); + (global.crypto || global.msCrypto).getRandomValues(out); + } + return tostring(out); + } catch (e) { + var browser = global.navigator, + plugins = browser && browser.plugins; + return [+new Date, global, plugins, global.screen, tostring(pool)]; + } +} + +// +// tostring() +// Converts an array of charcodes to a string +// +function tostring(a) { + return String.fromCharCode.apply(0, a); +} + +// +// When seedrandom.js is loaded, we immediately mix a few bits +// from the built-in RNG into the entropy pool. Because we do +// not want to interfere with deterministic PRNG state later, +// seedrandom will not call math.random on its own again after +// initialization. +// +mixkey(math.random(), pool); + +// +// Nodejs and AMD support: export the implementation as a module using +// either convention. +// +if ( true && module.exports) { + module.exports = seedrandom; + // When in node.js, try using crypto package for autoseeding. 
+ try { + nodecrypto = __webpack_require__(75); + } catch (ex) {} +} else if (true) { + !(__WEBPACK_AMD_DEFINE_RESULT__ = (function() { return seedrandom; }).call(exports, __webpack_require__, exports, module), + __WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__)); +} + +// End anonymous scope, and pass initial values. +})( + [], // pool: entropy pool starts empty + Math // math: package containing random, pow, and seedrandom +); + + +/***/ }), +/* 75 */ +/***/ (function(module, exports) { + +/* (ignored) */ + +/***/ }), +/* 76 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(global) {var scope = (typeof global !== "undefined" && global) || + (typeof self !== "undefined" && self) || + window; +var apply = Function.prototype.apply; + +// DOM APIs, for completeness + +exports.setTimeout = function() { + return new Timeout(apply.call(setTimeout, scope, arguments), clearTimeout); +}; +exports.setInterval = function() { + return new Timeout(apply.call(setInterval, scope, arguments), clearInterval); +}; +exports.clearTimeout = +exports.clearInterval = function(timeout) { + if (timeout) { + timeout.close(); + } +}; + +function Timeout(id, clearFn) { + this._id = id; + this._clearFn = clearFn; +} +Timeout.prototype.unref = Timeout.prototype.ref = function() {}; +Timeout.prototype.close = function() { + this._clearFn.call(scope, this._id); +}; + +// Does not start the time, just sets up the members needed. 
+exports.enroll = function(item, msecs) { + clearTimeout(item._idleTimeoutId); + item._idleTimeout = msecs; +}; + +exports.unenroll = function(item) { + clearTimeout(item._idleTimeoutId); + item._idleTimeout = -1; +}; + +exports._unrefActive = exports.active = function(item) { + clearTimeout(item._idleTimeoutId); + + var msecs = item._idleTimeout; + if (msecs >= 0) { + item._idleTimeoutId = setTimeout(function onTimeout() { + if (item._onTimeout) + item._onTimeout(); + }, msecs); + } +}; + +// setimmediate attaches itself to the global object +__webpack_require__(77); +// On some exotic environments, it's not clear which object `setimmediate` was +// able to install onto. Search each possibility in the same order as the +// `setimmediate` library. +exports.setImmediate = (typeof self !== "undefined" && self.setImmediate) || + (typeof global !== "undefined" && global.setImmediate) || + (this && this.setImmediate); +exports.clearImmediate = (typeof self !== "undefined" && self.clearImmediate) || + (typeof global !== "undefined" && global.clearImmediate) || + (this && this.clearImmediate); + +/* WEBPACK VAR INJECTION */}.call(this, __webpack_require__(27))) + +/***/ }), +/* 77 */ +/***/ (function(module, exports, __webpack_require__) { + +/* WEBPACK VAR INJECTION */(function(global, process) {(function (global, undefined) { + "use strict"; + + if (global.setImmediate) { + return; + } + + var nextHandle = 1; // Spec says greater than zero + var tasksByHandle = {}; + var currentlyRunningATask = false; + var doc = global.document; + var registerImmediate; + + function setImmediate(callback) { + // Callback can either be a function or a string + if (typeof callback !== "function") { + callback = new Function("" + callback); + } + // Copy function arguments + var args = new Array(arguments.length - 1); + for (var i = 0; i < args.length; i++) { + args[i] = arguments[i + 1]; + } + // Store and register the task + var task = { callback: callback, args: args }; + 
tasksByHandle[nextHandle] = task; + registerImmediate(nextHandle); + return nextHandle++; + } + + function clearImmediate(handle) { + delete tasksByHandle[handle]; + } + + function run(task) { + var callback = task.callback; + var args = task.args; + switch (args.length) { + case 0: + callback(); + break; + case 1: + callback(args[0]); + break; + case 2: + callback(args[0], args[1]); + break; + case 3: + callback(args[0], args[1], args[2]); + break; + default: + callback.apply(undefined, args); + break; + } + } + + function runIfPresent(handle) { + // From the spec: "Wait until any invocations of this algorithm started before this one have completed." + // So if we're currently running a task, we'll need to delay this invocation. + if (currentlyRunningATask) { + // Delay by doing a setTimeout. setImmediate was tried instead, but in Firefox 7 it generated a + // "too much recursion" error. + setTimeout(runIfPresent, 0, handle); + } else { + var task = tasksByHandle[handle]; + if (task) { + currentlyRunningATask = true; + try { + run(task); + } finally { + clearImmediate(handle); + currentlyRunningATask = false; + } + } + } + } + + function installNextTickImplementation() { + registerImmediate = function(handle) { + process.nextTick(function () { runIfPresent(handle); }); + }; + } + + function canUsePostMessage() { + // The test against `importScripts` prevents this implementation from being installed inside a web worker, + // where `global.postMessage` means something completely different and can't be used for this purpose. 
+ if (global.postMessage && !global.importScripts) { + var postMessageIsAsynchronous = true; + var oldOnMessage = global.onmessage; + global.onmessage = function() { + postMessageIsAsynchronous = false; + }; + global.postMessage("", "*"); + global.onmessage = oldOnMessage; + return postMessageIsAsynchronous; + } + } + + function installPostMessageImplementation() { + // Installs an event handler on `global` for the `message` event: see + // * https://developer.mozilla.org/en/DOM/window.postMessage + // * http://www.whatwg.org/specs/web-apps/current-work/multipage/comms.html#crossDocumentMessages + + var messagePrefix = "setImmediate$" + Math.random() + "$"; + var onGlobalMessage = function(event) { + if (event.source === global && + typeof event.data === "string" && + event.data.indexOf(messagePrefix) === 0) { + runIfPresent(+event.data.slice(messagePrefix.length)); + } + }; + + if (global.addEventListener) { + global.addEventListener("message", onGlobalMessage, false); + } else { + global.attachEvent("onmessage", onGlobalMessage); + } + + registerImmediate = function(handle) { + global.postMessage(messagePrefix + handle, "*"); + }; + } + + function installMessageChannelImplementation() { + var channel = new MessageChannel(); + channel.port1.onmessage = function(event) { + var handle = event.data; + runIfPresent(handle); + }; + + registerImmediate = function(handle) { + channel.port2.postMessage(handle); + }; + } + + function installReadyStateChangeImplementation() { + var html = doc.documentElement; + registerImmediate = function(handle) { + // Create a - + - + + - + + - + + - + + - + + + + + + + \ No newline at end of file diff --git a/examples/jspsych-canvas-keyboard-response.html b/examples/jspsych-canvas-keyboard-response.html new file mode 100644 index 0000000000..560546a832 --- /dev/null +++ b/examples/jspsych-canvas-keyboard-response.html @@ -0,0 +1,78 @@ + + + + + + + + + + \ No newline at end of file diff --git 
a/examples/jspsych-canvas-slider-response.html b/examples/jspsych-canvas-slider-response.html new file mode 100644 index 0000000000..5ae17a67f6 --- /dev/null +++ b/examples/jspsych-canvas-slider-response.html @@ -0,0 +1,67 @@ + + + + + + + + + + \ No newline at end of file diff --git a/examples/jspsych-categorize-animation.html b/examples/jspsych-categorize-animation.html index ca999e655a..727fe3c935 100644 --- a/examples/jspsych-categorize-animation.html +++ b/examples/jspsych-categorize-animation.html @@ -4,30 +4,41 @@ - + + - - + - + + + @@ -39,4 +34,4 @@ - \ No newline at end of file + diff --git a/examples/jspsych-free-sort.html b/examples/jspsych-free-sort.html index 5b91d33f9b..58137cef0a 100644 --- a/examples/jspsych-free-sort.html +++ b/examples/jspsych-free-sort.html @@ -3,19 +3,104 @@ - + + + - - + + - - + - + - + + + - - + + diff --git a/examples/jspsych-image-slider-response.html b/examples/jspsych-image-slider-response.html index 40ebba32d0..dc5deffa47 100644 --- a/examples/jspsych-image-slider-response.html +++ b/examples/jspsych-image-slider-response.html @@ -3,30 +3,62 @@ - - + + - + + + + + + + \ No newline at end of file diff --git a/examples/jspsych-preload.html b/examples/jspsych-preload.html new file mode 100644 index 0000000000..60c2b14bab --- /dev/null +++ b/examples/jspsych-preload.html @@ -0,0 +1,140 @@ + + + + + + + + + + + + + diff --git a/examples/jspsych-reconstruction.html b/examples/jspsych-reconstruction.html index 926df6040a..706ab83410 100644 --- a/examples/jspsych-reconstruction.html +++ b/examples/jspsych-reconstruction.html @@ -4,7 +4,7 @@ - + - + diff --git a/examples/jspsych-same-different-html.html b/examples/jspsych-same-different-html.html index 82ed854b74..fd7021c77d 100644 --- a/examples/jspsych-same-different-html.html +++ b/examples/jspsych-same-different-html.html @@ -4,16 +4,16 @@ - + - + + @@ -18,7 +18,7 @@ var instructions = { type: 'html-button-response', stimulus: '

Each screen will show either an English word or letters that do not form a word.

'+ - '

Press Y if the letters form a valid word.

Press N if the letters do not form a valid word.

', + '

Press y if the letters form a valid word.

Press n if the letters do not form a valid word.

', choices: ['Ready to start'] } timeline.push(instructions); @@ -72,25 +72,27 @@ timeline: [ { type: 'html-keyboard-response', - stimulus: '

+

', + stimulus: '+', choices: jsPsych.NO_KEYS, trial_duration: 500, - post_trial_gap: 0 + post_trial_gap: 0, + css_classes: ['stimulus'] }, { type: 'html-keyboard-response', - stimulus: function(){ return "

"+jsPsych.timelineVariable('word', true)+"

"; }, + stimulus: jsPsych.timelineVariable('word'), choices: ['y','n'], post_trial_gap: 0, + css_classes: ['stimulus'], data: { word_validity: jsPsych.timelineVariable('word_validity'), word_frequency: jsPsych.timelineVariable('word_frequency') }, on_finish: function(data){ if(data.word_validity == 'valid'){ - var correct = data.key_press == jsPsych.pluginAPI.convertKeyCharacterToKeyCode('y'); + var correct = jsPsych.pluginAPI.compareKeys(data.response, 'y'); } else { - var correct = data.key_press == jsPsych.pluginAPI.convertKeyCharacterToKeyCode('n'); + var correct = jsPsych.pluginAPI.compareKeys(data.response, 'n'); } data.correct = correct; } @@ -111,7 +113,7 @@ "

Your average correct response time for low frequency English words was "+Math.round(low_rt)+"ms.

"+ "

The typical pattern of results is that people are faster to respond to high frequency (common) "+ "words than low frequency (uncommon) words.

"+ - "

Press C to see the entire set of data generated by this experiment.

"; + "

Press c to see the entire set of data generated by this experiment.

"; return message; diff --git a/examples/manual-preloading.html b/examples/manual-preloading.html index d0e84ac203..f820329e46 100644 --- a/examples/manual-preloading.html +++ b/examples/manual-preloading.html @@ -2,17 +2,16 @@ + + - - diff --git a/examples/pause-unpause.html b/examples/pause-unpause.html index da01943352..0f8dced0f6 100644 --- a/examples/pause-unpause.html +++ b/examples/pause-unpause.html @@ -4,7 +4,7 @@ - + - - + + + + + + + + + + \ No newline at end of file diff --git a/examples/timeline-variables-sampling.html b/examples/timeline-variables-sampling.html index 1ec151bf3f..e2700b9d9d 100644 --- a/examples/timeline-variables-sampling.html +++ b/examples/timeline-variables-sampling.html @@ -4,7 +4,7 @@ - + diff --git a/examples/timeline-variables.html b/examples/timeline-variables.html index a4d148c63b..31d08fe405 100644 --- a/examples/timeline-variables.html +++ b/examples/timeline-variables.html @@ -5,20 +5,23 @@ - - - + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/examples/webgazer_audio.html b/examples/webgazer_audio.html new file mode 100644 index 0000000000..d781e1f7de --- /dev/null +++ b/examples/webgazer_audio.html @@ -0,0 +1,90 @@ + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/examples/webgazer_image.html b/examples/webgazer_image.html new file mode 100644 index 0000000000..0b5e3a1bdf --- /dev/null +++ b/examples/webgazer_image.html @@ -0,0 +1,60 @@ + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/extensions/jspsych-ext-webgazer.js b/extensions/jspsych-ext-webgazer.js new file mode 100644 index 0000000000..cd44721467 --- /dev/null +++ b/extensions/jspsych-ext-webgazer.js @@ -0,0 +1,265 @@ +jsPsych.extensions['webgazer'] = (function () { + + var extension = {}; + + // private state for the extension + // extension authors can define public functions to interact + // with the state. 
recommend not exposing state directly + // so that state manipulations are checked. + var state = {}; + + // required, will be called at jsPsych.init + // should return a Promise + extension.initialize = function (params) { + // setting default values for params if not defined + params.round_predictions = typeof params.round_predictions === 'undefined' ? true : params.round_predictions; + params.auto_initialize = typeof params.auto_initialize === 'undefined' ? false : params.auto_initialize; + params.sampling_interval = typeof params.sampling_interval === 'undefined' ? 34 : params.sampling_interval; + + return new Promise(function (resolve, reject) { + if (typeof params.webgazer === 'undefined') { + if (window.webgazer) { + state.webgazer = window.webgazer; + } else { + reject(new Error('Webgazer extension failed to initialize. webgazer.js not loaded. Load webgazer.js before calling jsPsych.init()')); + } + } else { + state.webgazer = params.webgazer; + } + + // sets up event handler for webgazer data + state.webgazer.setGazeListener(handleGazeDataUpdate); + + // default to threadedRidge regression + // NEVER MIND... kalman filter is too useful. + //state.webgazer.workerScriptURL = 'js/webgazer/ridgeWorker.mjs'; + //state.webgazer.setRegression('threadedRidge'); + //state.webgazer.applyKalmanFilter(false); // kalman filter doesn't seem to work yet with threadedridge. + + // set state parameters + state.round_predictions = params.round_predictions; + state.sampling_interval = params.sampling_interval; + + // sets state for initialization + state.initialized = false; + state.activeTrial = false; + state.gazeUpdateCallbacks = []; + state.domObserver = new MutationObserver(mutationObserverCallback); + + // hide video by default + extension.hideVideo(); + + // hide predictions by default + extension.hidePredictions(); + + if (params.auto_initialize) { + // starts webgazer, and once it initializes we stop mouseCalibration and + // pause webgazer data. 
+ state.webgazer.begin().then(function () { + state.initialized = true; + extension.stopMouseCalibration(); + extension.pause(); + resolve(); + }).catch(function (error) { + console.error(error); + reject(error); + }); + } else { + resolve(); + } + }) + } + + // required, will be called when the trial starts (before trial loads) + extension.on_start = function (params) { + state.currentTrialData = []; + state.currentTrialTargets = {}; + state.currentTrialSelectors = params.targets; + + state.domObserver.observe(jsPsych.getDisplayElement(), {childList: true}) + + } + + // required will be called when the trial loads + extension.on_load = function (params) { + + // set current trial start time + state.currentTrialStart = performance.now(); + + // resume data collection + // state.webgazer.resume(); + + extension.startSampleInterval(); + + // set internal flag + state.activeTrial = true; + } + + // required, will be called when jsPsych.finishTrial() is called + // must return data object to be merged into data. + extension.on_finish = function (params) { + + // pause the eye tracker + extension.stopSampleInterval(); + + // stop watching the DOM + state.domObserver.disconnect(); + + // state.webgazer.pause(); + + // set internal flag + state.activeTrial = false; + + // send back the gazeData + return { + webgazer_data: state.currentTrialData, + webgazer_targets: state.currentTrialTargets + } + } + + extension.start = function () { + if(typeof state.webgazer == 'undefined'){ + console.error('Failed to start webgazer. Things to check: Is webgazer.js loaded? 
Is the webgazer extension included in jsPsych.init?') + return; + } + return new Promise(function (resolve, reject) { + state.webgazer.begin().then(function () { + state.initialized = true; + extension.stopMouseCalibration(); + extension.pause(); + resolve(); + }).catch(function (error) { + console.error(error); + reject(error); + }); + }); + } + + extension.startSampleInterval = function(interval){ + interval = typeof interval == 'undefined' ? state.sampling_interval : interval; + state.gazeInterval = setInterval(function(){ + state.webgazer.getCurrentPrediction().then(handleGazeDataUpdate); + }, state.sampling_interval); + // repeat the call here so that we get one immediate execution. above will not + // start until state.sampling_interval is reached the first time. + state.webgazer.getCurrentPrediction().then(handleGazeDataUpdate); + } + + extension.stopSampleInterval = function(){ + clearInterval(state.gazeInterval); + } + + extension.isInitialized = function(){ + return state.initialized; + } + + extension.faceDetected = function () { + return state.webgazer.getTracker().predictionReady; + } + + extension.showPredictions = function () { + state.webgazer.showPredictionPoints(true); + } + + extension.hidePredictions = function () { + state.webgazer.showPredictionPoints(false); + } + + extension.showVideo = function () { + state.webgazer.showVideo(true); + state.webgazer.showFaceOverlay(true); + state.webgazer.showFaceFeedbackBox(true); + } + + extension.hideVideo = function () { + state.webgazer.showVideo(false); + state.webgazer.showFaceOverlay(false); + state.webgazer.showFaceFeedbackBox(false); + } + + extension.resume = function () { + state.webgazer.resume(); + } + + extension.pause = function () { + state.webgazer.pause(); + // sometimes gaze dot will show and freeze after pause? 
+ if(document.querySelector('#webgazerGazeDot')){ + document.querySelector('#webgazerGazeDot').style.display = 'none'; + } + } + + extension.resetCalibration = function(){ + state.webgazer.clearData(); + } + + extension.stopMouseCalibration = function () { + state.webgazer.removeMouseEventListeners() + } + + extension.startMouseCalibration = function () { + state.webgazer.addMouseEventListeners() + } + + extension.calibratePoint = function (x, y) { + state.webgazer.recordScreenPosition(x, y, 'click'); + } + + extension.setRegressionType = function (regression_type) { + var valid_regression_models = ['ridge', 'weightedRidge', 'threadedRidge']; + if (valid_regression_models.includes(regression_type)) { + state.webgazer.setRegression(regression_type) + } else { + console.warn('Invalid regression_type parameter for webgazer.setRegressionType. Valid options are ridge, weightedRidge, and threadedRidge.') + } + } + + extension.getCurrentPrediction = function () { + return state.webgazer.getCurrentPrediction(); + } + + extension.onGazeUpdate = function(callback){ + state.gazeUpdateCallbacks.push(callback); + return function(){ + state.gazeUpdateCallbacks = state.gazeUpdateCallbacks.filter(function(item){ + return item !== callback; + }); + } + } + + function handleGazeDataUpdate(gazeData, elapsedTime) { + if (gazeData !== null){ + var d = { + x: state.round_predictions ? Math.round(gazeData.x) : gazeData.x, + y: state.round_predictions ? 
Math.round(gazeData.y) : gazeData.y, + t: gazeData.t + } + if(state.activeTrial) { + //console.log(`handleUpdate: t = ${Math.round(gazeData.t)}, now = ${Math.round(performance.now())}`); + d.t = Math.round(gazeData.t - state.currentTrialStart) + state.currentTrialData.push(d); // add data to current trial's data + } + state.currentGaze = d; + for(var i=0; i tag and the entire page - if(typeof opts.display_element == 'undefined'){ - // check if there is a body element on the page - var body = document.querySelector('body'); - if (body === null) { - document.documentElement.appendChild(document.createElement('body')); - } - // using the full page, so we need the HTML element to - // have 100% height, and body to be full width and height with - // no margin - document.querySelector('html').style.height = '100%'; - document.querySelector('body').style.margin = '0px'; - document.querySelector('body').style.height = '100%'; - document.querySelector('body').style.width = '100%'; - opts.display_element = document.querySelector('body'); - } else { - // make sure that the display element exists on the page - var display; - if (opts.display_element instanceof Element) { - var display = opts.display_element; - } else { - var display = document.querySelector('#' + opts.display_element); - } - if(display === null) { - console.error('The display_element specified in jsPsych.init() does not exist in the DOM.'); + // detect whether page is running in browser as a local file, and if so, disable web audio and video preloading to prevent CORS issues + if (window.location.protocol == 'file:' && (options.override_safe_mode === false || typeof options.override_safe_mode == 'undefined')) { + options.use_webaudio = false; + file_protocol = true; + console.warn("jsPsych detected that it is running via the file:// protocol and not on a web server. "+ + "To prevent issues with cross-origin requests, Web Audio and video preloading have been disabled. 
"+ + "If you would like to override this setting, you can set 'override_safe_mode' to 'true' in jsPsych.init. "+ + "For more information, see: https://www.jspsych.org/overview/running-experiments"); + } + + // override default options if user specifies an option + opts = Object.assign({}, defaults, options); + + // set DOM element where jsPsych will render content + // if undefined, then jsPsych will use the tag and the entire page + if(typeof opts.display_element == 'undefined'){ + // check if there is a body element on the page + var body = document.querySelector('body'); + if (body === null) { + document.documentElement.appendChild(document.createElement('body')); + } + // using the full page, so we need the HTML element to + // have 100% height, and body to be full width and height with + // no margin + document.querySelector('html').style.height = '100%'; + document.querySelector('body').style.margin = '0px'; + document.querySelector('body').style.height = '100%'; + document.querySelector('body').style.width = '100%'; + opts.display_element = document.querySelector('body'); } else { - opts.display_element = display; + // make sure that the display element exists on the page + var display; + if (opts.display_element instanceof Element) { + var display = opts.display_element; + } else { + var display = document.querySelector('#' + opts.display_element); + } + if(display === null) { + console.error('The display_element specified in jsPsych.init() does not exist in the DOM.'); + } else { + opts.display_element = display; + } } - } - opts.display_element.innerHTML = '
<div class="jspsych-content-wrapper"><div id="jspsych-content"></div></div>'; - DOM_container = opts.display_element; - DOM_target = document.querySelector('#jspsych-content'); + opts.display_element.innerHTML = '<div class="jspsych-content-wrapper"><div id="jspsych-content"></div></div>
'; + DOM_container = opts.display_element; + DOM_target = document.querySelector('#jspsych-content'); - // add tabIndex attribute to scope event listeners - opts.display_element.tabIndex = 0; + // add tabIndex attribute to scope event listeners + opts.display_element.tabIndex = 0; - // add CSS class to DOM_target - if(opts.display_element.className.indexOf('jspsych-display-element') == -1){ - opts.display_element.className += ' jspsych-display-element'; - } - DOM_target.className += 'jspsych-content'; + // add CSS class to DOM_target + if(opts.display_element.className.indexOf('jspsych-display-element') == -1){ + opts.display_element.className += ' jspsych-display-element'; + } + DOM_target.className += 'jspsych-content'; - // set experiment_width if not null - if(opts.experiment_width !== null){ - DOM_target.style.width = opts.experiment_width + "px"; - } + // set experiment_width if not null + if(opts.experiment_width !== null){ + DOM_target.style.width = opts.experiment_width + "px"; + } - // create experiment timeline - timeline = new TimelineNode({ - timeline: opts.timeline - }); + // create experiment timeline + timeline = new TimelineNode({ + timeline: opts.timeline + }); - // initialize audio context based on options and browser capabilities - jsPsych.pluginAPI.initAudio(); - - // below code resets event listeners that may have lingered from - // a previous incomplete experiment loaded in same DOM. - jsPsych.pluginAPI.reset(opts.display_element); - // create keyboard event listeners - jsPsych.pluginAPI.createKeyboardEventListeners(opts.display_element); - // create listeners for user browser interaction - jsPsych.data.createInteractionListeners(); - - // add event for closing window - window.addEventListener('beforeunload', opts.on_close); - - // check exclusions before continuing - checkExclusions(opts.exclusions, - function(){ - // success! user can continue... 
- // start experiment, with or without preloading - if(opts.auto_preload){ - jsPsych.pluginAPI.autoPreload(timeline, startExperiment, opts.preload_images, opts.preload_audio, opts.preload_video, opts.show_preload_progress_bar); - if(opts.max_load_time > 0){ - setTimeout(function(){ - if(!loaded && !loadfail){ - core.loadFail(); - } - }, opts.max_load_time); - } - } else { - startExperiment(); + // initialize audio context based on options and browser capabilities + jsPsych.pluginAPI.initAudio(); + + // below code resets event listeners that may have lingered from + // a previous incomplete experiment loaded in same DOM. + jsPsych.pluginAPI.reset(opts.display_element); + // create keyboard event listeners + jsPsych.pluginAPI.createKeyboardEventListeners(opts.display_element); + // create listeners for user browser interaction + jsPsych.data.createInteractionListeners(); + + // add event for closing window + window.addEventListener('beforeunload', opts.on_close); + + // check exclusions before continuing + checkExclusions(opts.exclusions, + function(){ + // success! user can continue... + // start experiment + loadExtensions(); + }, + function(){ + // fail. incompatible user. } - }, - function(){ - // fail. incompatible user. 
+ ); + function loadExtensions() { + // run the .initialize method of any extensions that are in use + // these should return a Promise to indicate when loading is complete + if (opts.extensions.length == 0) { + startExperiment(); + } else { + var loaded_extensions = 0; + for (var i = 0; i < opts.extensions.length; i++) { + var ext_params = opts.extensions[i].params; + if (!ext_params) { + ext_params = {} + } + jsPsych.extensions[opts.extensions[i].type].initialize(ext_params) + .then(() => { + loaded_extensions++; + if (loaded_extensions == opts.extensions.length) { + startExperiment(); + } + }) + .catch((error_message) => { + console.error(error_message); + }) + } + } } - ); - }; + + }; + + // execute init() when the document is ready + if (document.readyState === "complete") { + init(); + } else { + window.addEventListener("load", init); + } + } core.progress = function() { @@ -235,6 +267,11 @@ window.jsPsych = (function() { if(current_trial_finished){ return; } current_trial_finished = true; + // remove any CSS classes that were added to the DOM via css_classes parameter + if(typeof current_trial.css_classes !== 'undefined' && Array.isArray(current_trial.css_classes)){ + DOM_target.classList.remove(...current_trial.css_classes); + } + // write the data from the trial data = typeof data == 'undefined' ? {} : data; jsPsych.data.write(data); @@ -246,6 +283,38 @@ window.jsPsych = (function() { // of the DataCollection, for easy access and editing. 
var trial_data_values = trial_data.values()[0]; + if(typeof current_trial.save_trial_parameters == 'object'){ + var keys = Object.keys(current_trial.save_trial_parameters); + for(var i=0; i 0) { @@ -299,23 +371,21 @@ window.jsPsych = (function() { return timeline.activeID(); }; - core.timelineVariable = function(varname, execute){ - if(execute){ + core.timelineVariable = function(varname, immediate){ + if(typeof immediate == 'undefined'){ immediate = false; } + if(jsPsych.internal.call_immediate || immediate === true){ return timeline.timelineVariable(varname); } else { return function() { return timeline.timelineVariable(varname); } } } + core.allTimelineVariables = function(){ + return timeline.allTimelineVariables(); + } + core.addNodeToEndOfTimeline = function(new_timeline, preload_callback){ timeline.insert(new_timeline); - if(typeof preload_callback !== 'undefinded'){ - if(opts.auto_preload){ - jsPsych.pluginAPI.autoPreload(timeline, preload_callback); - } else { - preload_callback(); - } - } } core.pauseExperiment = function(){ @@ -336,6 +406,10 @@ window.jsPsych = (function() { DOM_target.innerHTML = message; } + core.getSafeModeStatus = function() { + return file_protocol; + } + function TimelineNode(parameters, parent, relativeID) { // a unique ID for this node, relative to the parent @@ -449,7 +523,7 @@ window.jsPsych = (function() { // update the current trial node to be completed // returns true if the node is complete after advance (all subnodes are also complete) // returns false otherwise - this.advance = function() { + this.advance = function () { // first check to see if done if (progress.done) { @@ -459,27 +533,32 @@ window.jsPsych = (function() { // if node has not started yet (progress.current_location == -1), // then try to start the node. 
if (progress.current_location == -1) { - // check for conditonal function on nodes with timelines - if (typeof timeline_parameters != 'undefined') { - if (typeof timeline_parameters.conditional_function !== 'undefined') { + // check for on_timeline_start and conditional function on nodes with timelines + if (typeof timeline_parameters !== 'undefined') { + // only run the conditional function if this is the first repetition of the timeline when + // repetitions > 1, and only when on the first variable set + if (typeof timeline_parameters.conditional_function !== 'undefined' && progress.current_repetition == 0 && progress.current_variable_set == 0) { + jsPsych.internal.call_immediate = true; var conditional_result = timeline_parameters.conditional_function(); + jsPsych.internal.call_immediate = false; // if the conditional_function() returns false, then the timeline // doesn't run and is marked as complete. if (conditional_result == false) { progress.done = true; return true; } - // if the conditonal_function() returns true, then the node can start - else { - progress.current_location = 0; - } } - // if there is no conditional_function, then the node can start - else { - progress.current_location = 0; + + // if we reach this point then the node has its own timeline and will start + // so we need to check if there is an on_timeline_start function if we are on the first variable set + if (typeof timeline_parameters.on_timeline_start !== 'undefined' && progress.current_variable_set == 0) { + timeline_parameters.on_timeline_start(); } + + } - // if the node does not have a timeline, then it can start + // if we reach this point, then either the node doesn't have a timeline or the + conditional function returned true and it can start progress.current_location = 0; // call advance again on this node now that it is pointing to a new location return this.advance(); @@ -504,6 +583,7 @@ } // if we've reached the end of the timeline (which, if 
the code is here, we have) + // there are a few steps to see what to do next... // first, check the timeline_variables to see if we need to loop through again @@ -518,26 +598,41 @@ window.jsPsych = (function() { // if we're all done with the timeline_variables, then check to see if there are more repetitions else if (progress.current_repetition < timeline_parameters.repetitions - 1) { this.nextRepetiton(); + // check to see if there is an on_timeline_finish function + if (typeof timeline_parameters.on_timeline_finish !== 'undefined') { + timeline_parameters.on_timeline_finish(); + } return this.advance(); } - // if we're all done with the repetitions, check if there is a loop function. - else if (typeof timeline_parameters.loop_function !== 'undefined') { - if (timeline_parameters.loop_function(this.generatedData())) { - this.reset(); - return parent_node.advance(); - } else { - progress.done = true; - return true; - } - } - // no more loops on this timeline, we're done! + // if we're all done with the repetitions... else { - progress.done = true; - return true; + // check to see if there is an on_timeline_finish function + if (typeof timeline_parameters.on_timeline_finish !== 'undefined') { + timeline_parameters.on_timeline_finish(); + } + + // if we're all done with the repetitions, check if there is a loop function. + if (typeof timeline_parameters.loop_function !== 'undefined') { + jsPsych.internal.call_immediate = true; + if (timeline_parameters.loop_function(this.generatedData())) { + this.reset(); + jsPsych.internal.call_immediate = false; + return parent_node.advance(); + } else { + progress.done = true; + jsPsych.internal.call_immediate = false; + return true; + } + } + + } + // no more loops on this timeline, we're done! 
+ progress.done = true; + return true; } } @@ -577,8 +672,52 @@ window.jsPsych = (function() { // if progress.current_location is -1, then the timeline variable is being evaluated // in a function that runs prior to the trial starting, so we should treat that trial // as being the active trial for purposes of finding the value of the timeline variable - var loc = Math.max(0, progress.current_location); - return timeline_parameters.timeline[loc].timelineVariable(variable_name); + var loc = Math.max(0, progress.current_location); + // if loc is greater than the number of elements on this timeline, then the timeline + // variable is being evaluated in a function that runs after the trial on the timeline + // are complete but before advancing to the next (like a loop_function). + // treat the last active trial as the active trial for this purpose. + if(loc == timeline_parameters.timeline.length){ + loc = loc - 1; + } + // now find the variable + return timeline_parameters.timeline[loc].timelineVariable(variable_name); + } + } + + // recursively get all the timeline variables for this trial + this.allTimelineVariables = function(){ + var all_tvs = this.allTimelineVariablesNames(); + var all_tvs_vals = {}; + for(var i=0; i trials.length) n = trials.length; + return DataCollection(trials.slice(0, n)); + } + + /** + * Queries the last n elements in a collection of trials. + * + * @param {number} n A positive integer of elements to return. A value of + * n that is less than 1 will throw an error. + * + * @return {Array} Last n objects of a collection of trials. If fewer than + * n trials are available, the trials.length elements will + * be returned. + * + */ + data_collection.last = function(n) { + if (typeof n == 'undefined') { n = 1 } + if (n < 1) { + throw `You must query with a positive nonzero integer. 
Please use a + different value for n.`; + } + if (trials.length == 0) return DataCollection([]); + if (n > trials.length) n = trials.length; + return DataCollection(trials.slice(trials.length - n, trials.length)); } data_collection.values = function(){ @@ -1588,6 +1838,9 @@ jsPsych.data = (function() { var line = ''; for (var j = 0; j < columns.length; j++) { var value = (typeof array[i][columns[j]] === 'undefined') ? '' : array[i][columns[j]]; + if(typeof value == 'object') { + value = JSON.stringify(value); + } var valueString = value + ""; line += '"' + valueString.replace(/"/g, '""') + '",'; } @@ -1982,10 +2235,10 @@ jsPsych.pluginAPI = (function() { for(var i=0; i= 2.1.2 < 3" - } - }, - "ignore-walk": { - "version": "3.0.1", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "minimatch": "^3.0.4" - } - }, - "inflight": { - "version": "1.0.6", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "once": "^1.3.0", - "wrappy": "1" - } - }, - "inherits": { - "version": "2.0.3", - "bundled": true, - "dev": true - }, - "ini": { - "version": "1.3.5", - "bundled": true, - "dev": true, - "optional": true - }, - "is-fullwidth-code-point": { - "version": "1.0.0", - "bundled": true, - "dev": true, - "requires": { - "number-is-nan": "^1.0.0" - } - }, - "isarray": { - "version": "1.0.0", - "bundled": true, - "dev": true, - "optional": true - }, - "minimatch": { - "version": "3.0.4", - "bundled": true, - "dev": true, - "requires": { - "brace-expansion": "^1.1.7" - } - }, - "minimist": { - "version": "0.0.8", - "bundled": true, - "dev": true - }, - "minipass": { - "version": "2.3.5", - "bundled": true, - "dev": true, - "requires": { - "safe-buffer": "^5.1.2", - "yallist": "^3.0.0" - } - }, - "minizlib": { - "version": "1.2.1", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "minipass": "^2.2.1" - } - }, - "mkdirp": { - "version": "0.5.1", - "bundled": true, - "dev": true, - "requires": { - "minimist": "0.0.8" - } 
- }, - "ms": { - "version": "2.1.1", - "bundled": true, - "dev": true, - "optional": true - }, - "needle": { - "version": "2.3.0", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "debug": "^4.1.0", - "iconv-lite": "^0.4.4", - "sax": "^1.2.4" - } - }, - "node-pre-gyp": { - "version": "0.12.0", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "detect-libc": "^1.0.2", - "mkdirp": "^0.5.1", - "needle": "^2.2.1", - "nopt": "^4.0.1", - "npm-packlist": "^1.1.6", - "npmlog": "^4.0.2", - "rc": "^1.2.7", - "rimraf": "^2.6.1", - "semver": "^5.3.0", - "tar": "^4" - } - }, - "nopt": { - "version": "4.0.1", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "abbrev": "1", - "osenv": "^0.1.4" - } - }, - "npm-bundled": { - "version": "1.0.6", - "bundled": true, - "dev": true, - "optional": true - }, - "npm-packlist": { - "version": "1.4.1", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "ignore-walk": "^3.0.1", - "npm-bundled": "^1.0.1" - } - }, - "npmlog": { - "version": "4.1.2", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "are-we-there-yet": "~1.1.2", - "console-control-strings": "~1.1.0", - "gauge": "~2.7.3", - "set-blocking": "~2.0.0" - } - }, - "number-is-nan": { - "version": "1.0.1", - "bundled": true, - "dev": true - }, - "object-assign": { - "version": "4.1.1", - "bundled": true, - "dev": true, - "optional": true - }, - "once": { - "version": "1.4.0", - "bundled": true, - "dev": true, - "requires": { - "wrappy": "1" - } - }, - "os-homedir": { - "version": "1.0.2", - "bundled": true, - "dev": true, - "optional": true - }, - "os-tmpdir": { - "version": "1.0.2", - "bundled": true, - "dev": true, - "optional": true - }, - "osenv": { - "version": "0.1.5", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "os-homedir": "^1.0.0", - "os-tmpdir": "^1.0.0" - } - }, - "path-is-absolute": { - "version": "1.0.1", - "bundled": true, - "dev": 
true, - "optional": true - }, - "process-nextick-args": { - "version": "2.0.0", - "bundled": true, - "dev": true, - "optional": true - }, - "rc": { - "version": "1.2.8", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "deep-extend": "^0.6.0", - "ini": "~1.3.0", - "minimist": "^1.2.0", - "strip-json-comments": "~2.0.1" - }, - "dependencies": { - "minimist": { - "version": "1.2.0", - "bundled": true, - "dev": true, - "optional": true - } - } - }, - "readable-stream": { - "version": "2.3.6", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "core-util-is": "~1.0.0", - "inherits": "~2.0.3", - "isarray": "~1.0.0", - "process-nextick-args": "~2.0.0", - "safe-buffer": "~5.1.1", - "string_decoder": "~1.1.1", - "util-deprecate": "~1.0.1" - } - }, - "rimraf": { - "version": "2.6.3", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "glob": "^7.1.3" - } - }, - "safe-buffer": { - "version": "5.1.2", - "bundled": true, - "dev": true - }, - "safer-buffer": { - "version": "2.1.2", - "bundled": true, - "dev": true, - "optional": true - }, - "sax": { - "version": "1.2.4", - "bundled": true, - "dev": true, - "optional": true - }, - "semver": { - "version": "5.7.0", - "bundled": true, - "dev": true, - "optional": true - }, - "set-blocking": { - "version": "2.0.0", - "bundled": true, - "dev": true, - "optional": true - }, - "signal-exit": { - "version": "3.0.2", - "bundled": true, - "dev": true, - "optional": true - }, - "string-width": { - "version": "1.0.2", - "bundled": true, - "dev": true, - "requires": { - "code-point-at": "^1.0.0", - "is-fullwidth-code-point": "^1.0.0", - "strip-ansi": "^3.0.0" - } - }, - "string_decoder": { - "version": "1.1.1", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "safe-buffer": "~5.1.0" - } - }, - "strip-ansi": { - "version": "3.0.1", - "bundled": true, - "dev": true, - "requires": { - "ansi-regex": "^2.0.0" - } - }, - "strip-json-comments": { - "version": 
"2.0.1", - "bundled": true, - "dev": true, - "optional": true - }, - "tar": { - "version": "4.4.8", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "chownr": "^1.1.1", - "fs-minipass": "^1.2.5", - "minipass": "^2.3.4", - "minizlib": "^1.1.1", - "mkdirp": "^0.5.0", - "safe-buffer": "^5.1.2", - "yallist": "^3.0.2" - } - }, - "util-deprecate": { - "version": "1.0.2", - "bundled": true, - "dev": true, - "optional": true - }, - "wide-align": { - "version": "1.1.3", - "bundled": true, - "dev": true, - "optional": true, - "requires": { - "string-width": "^1.0.2 || 2" - } - }, - "wrappy": { - "version": "1.0.2", - "bundled": true, - "dev": true - }, - "yallist": { - "version": "3.0.3", - "bundled": true, - "dev": true - } - } + "optional": true }, "function-bind": { "version": "1.1.1", @@ -2078,10 +1799,22 @@ "integrity": "sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==", "dev": true }, + "gensync": { + "version": "1.0.0-beta.1", + "resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.1.tgz", + "integrity": "sha512-r8EC6NO1sngH/zdD9fiRDLdcgnbayXah+mLgManTaIZJqEC1MZstmnox8KpnI2/fxQwrp5OpCOYWLp4rBl4Jcg==", + "dev": true + }, "get-caller-file": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-1.0.3.tgz", - "integrity": "sha512-3t6rVToeoZfYSGd8YoLFR2DJkiQrIiUrGcjvFX2mDw3bn6k2OtwHN0TNCLbBO+w8qTvimhDkv+LSscbJY1vE6w==", + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", + "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", + "dev": true + }, + "get-package-type": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/get-package-type/-/get-package-type-0.1.0.tgz", + "integrity": "sha512-pjzuKtY64GYfWizNAJ0fr9VqttZkNiK2iS430LtIHzjBEr6bX8Am2zm4sW4Ro5wjWW5cAlRL1qAMTcXbjNAO2Q==", "dev": true }, "get-stream": { @@ -2109,9 
+1842,9 @@ } }, "glob": { - "version": "7.1.4", - "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.4.tgz", - "integrity": "sha512-hkLPepehmnKk41pUGm3sYxoFs/umurYfYJCerbXEyFIWcAzvpipAgVkBqqT9RBKMGjnq6kMuyYwha6csxbiM1A==", + "version": "7.1.6", + "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.6.tgz", + "integrity": "sha512-LwaxwyZ72Lk7vZINtNNrywX0ZuLyStrdDtabefZKAY5ZGJhVtgdznluResxNmPitE0SAO+O26sWTHeKSI2wMBA==", "dev": true, "requires": { "fs.realpath": "^1.0.0", @@ -2129,28 +1862,17 @@ "dev": true }, "graceful-fs": { - "version": "4.2.0", - "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.0.tgz", - "integrity": "sha512-jpSvDPV4Cq/bgtpndIWbI5hmYxhQGHPC4d4cqBPb4DLniCfhJokdXhwhaDuLBGLQdvvRum/UiX6ECVIPvDXqdg==", + "version": "4.2.4", + "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz", + "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw==", "dev": true }, "growly": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/growly/-/growly-1.3.0.tgz", "integrity": "sha1-8QdIy+dq+WS3yWyTxrzCivEgwIE=", - "dev": true - }, - "handlebars": { - "version": "4.1.2", - "resolved": "https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz", - "integrity": "sha512-nvfrjqvt9xQ8Z/w0ijewdD/vvWDTOweBUm96NTr66Wfvo1mJenBLwcYmPs3TIBP5ruzYGD7Hx/DaM9RmhroGPw==", "dev": true, - "requires": { - "neo-async": "^2.6.0", - "optimist": "^0.6.1", - "source-map": "^0.6.1", - "uglify-js": "^3.1.4" - } + "optional": true }, "har-schema": { "version": "2.0.0", @@ -2159,12 +1881,12 @@ "dev": true }, "har-validator": { - "version": "5.1.3", - "resolved": "https://registry.npmjs.org/har-validator/-/har-validator-5.1.3.tgz", - "integrity": "sha512-sNvOCzEQNr/qrvJgc3UG/kD4QtlHycrzwS+6mfTrrSq97BvaYcPZZI1ZSqGSPR73Cxn4LKTD4PttRwfU7jWq5g==", + "version": "5.1.5", + "resolved": "https://registry.npmjs.org/har-validator/-/har-validator-5.1.5.tgz", + "integrity": 
"sha512-nmT2T0lljbxdQZfspsno9hgrG3Uir6Ks5afism62poxqBM6sDnMEuPmzTq8XN0OEwqKLLdh1jQI3qyE66Nzb3w==", "dev": true, "requires": { - "ajv": "^6.5.5", + "ajv": "^6.12.3", "har-schema": "^2.0.0" } }, @@ -2178,15 +1900,9 @@ } }, "has-flag": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", - "integrity": "sha1-tdRU3CGZriJWmfNGfloH87lVuv0=", - "dev": true - }, - "has-symbols": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.0.tgz", - "integrity": "sha1-uhqPGvKg/DllD1yFA2dwQSIGO0Q=", + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", "dev": true }, "has-value": { @@ -2210,6 +1926,26 @@ "kind-of": "^4.0.0" }, "dependencies": { + "is-number": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-3.0.0.tgz", + "integrity": "sha1-JP1iAaR4LPUFYcgQJ2r8fRLXEZU=", + "dev": true, + "requires": { + "kind-of": "^3.0.2" + }, + "dependencies": { + "kind-of": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", + "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, "kind-of": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-4.0.0.tgz", @@ -2222,20 +1958,26 @@ } }, "hosted-git-info": { - "version": "2.7.1", - "resolved": "https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.7.1.tgz", - "integrity": "sha512-7T/BxH19zbcCTa8XkMlbK5lTo1WtgkFi3GvdWEyNuc4Vex7/9Dqbnpsf4JMydcfj9HCg4zUWFTL3Za6lapg5/w==", + "version": "2.8.8", + "resolved": "https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-2.8.8.tgz", + "integrity": "sha512-f/wzC2QaWBs7t9IYqB4T3sR1xviIViXJRJTWBlx2Gf3g0Xi5vI7Yy4koXQ1c9OYDGHN9sBy1DQ2AB8fqZBWhUg==", "dev": true }, "html-encoding-sniffer": { 
- "version": "1.0.2", - "resolved": "https://registry.npmjs.org/html-encoding-sniffer/-/html-encoding-sniffer-1.0.2.tgz", - "integrity": "sha512-71lZziiDnsuabfdYiUeWdCVyKuqwWi23L8YeIgV9jSSZHCtb6wB1BKWooH7L3tn4/FuZJMVWyNaIDr4RGmaSYw==", + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/html-encoding-sniffer/-/html-encoding-sniffer-2.0.1.tgz", + "integrity": "sha512-D5JbOMBIR/TVZkubHT+OyT2705QvogUW4IBn6nHd756OwieSF9aDYFj4dv6HHEVGYbHaLETa3WggZYWWMyy3ZQ==", "dev": true, "requires": { - "whatwg-encoding": "^1.0.1" + "whatwg-encoding": "^1.0.5" } }, + "html-escaper": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/html-escaper/-/html-escaper-2.0.2.tgz", + "integrity": "sha512-H2iMtd0I4Mt5eYiapRdIDjp+XzelXQ0tFE4JS7YFwFevXXMmOp9myNrUvCg0D6ws8iqkRPBfKHgbwig1SmlLfg==", + "dev": true + }, "http-signature": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/http-signature/-/http-signature-1.2.0.tgz", @@ -2247,6 +1989,12 @@ "sshpk": "^1.7.0" } }, + "human-signals": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/human-signals/-/human-signals-1.1.1.tgz", + "integrity": "sha512-SEQu7vl8KjNL2eoGBLF3+wAjpsNfA9XMlXAYj/3EdaNfAlxKthD1xjEQfGOUhllCGGJVNY34bRr6lPINhNjyZw==", + "dev": true + }, "iconv-lite": { "version": "0.4.24", "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", @@ -2257,13 +2005,13 @@ } }, "import-local": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/import-local/-/import-local-2.0.0.tgz", - "integrity": "sha512-b6s04m3O+s3CGSbqDIyP4R6aAwAeYlVq9+WUWep6iHa8ETRf9yei1U48C5MmfJmV9AiLYYBKPMq/W+/WRpQmCQ==", + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/import-local/-/import-local-3.0.2.tgz", + "integrity": "sha512-vjL3+w0oulAVZ0hBHnxa/Nm5TAurf9YLQJDhqRZyqb+VKGOB6LU8t9H1Nr5CIo16vh9XfJTOoHwU0B71S557gA==", "dev": true, "requires": { - "pkg-dir": "^3.0.0", - "resolve-cwd": "^2.0.0" + "pkg-dir": "^4.2.0", + "resolve-cwd": "^3.0.0" } }, "imurmurhash": { @@ 
-2288,19 +2036,10 @@ "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", "dev": true }, - "invariant": { - "version": "2.2.4", - "resolved": "https://registry.npmjs.org/invariant/-/invariant-2.2.4.tgz", - "integrity": "sha512-phJfQVBuaJM5raOpJjSfkiD6BpbCE4Ns//LaXl6wGYtUBY83nWS6Rf9tXm2e8VaK60JEjYldbPif/A2B1C2gNA==", - "dev": true, - "requires": { - "loose-envify": "^1.0.0" - } - }, - "invert-kv": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/invert-kv/-/invert-kv-2.0.0.tgz", - "integrity": "sha512-wPVv/y/QQ/Uiirj/vh3oP+1Ww+AWehmi1g5fFWGPF6IpCBCDVrhgHRMvrLfdYcwDh3QJbGXDW4JAuzxElLSqKA==", + "ip-regex": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/ip-regex/-/ip-regex-2.1.0.tgz", + "integrity": "sha1-+ni/XS5pE8kRzp+BnuUUa7bYROk=", "dev": true }, "is-accessor-descriptor": { @@ -2335,12 +2074,6 @@ "integrity": "sha512-NcdALwpXkTm5Zvvbk7owOUSvVvBKDgKP5/ewfXEznmQFfs4ZRmanOeKBTjRVjka3QFoN6XJ+9F3USqfHqTaU5w==", "dev": true }, - "is-callable": { - "version": "1.1.4", - "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.1.4.tgz", - "integrity": "sha512-r5p9sxJjYnArLjObpjA4xu5EKI3CuKHkJXMhT7kwbpUyIFD1n5PMAsoPvWnvtZiNz7LjkYDRZhd7FlI0eMijEA==", - "dev": true - }, "is-ci": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/is-ci/-/is-ci-2.0.0.tgz", @@ -2350,6 +2083,15 @@ "ci-info": "^2.0.0" } }, + "is-core-module": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.0.0.tgz", + "integrity": "sha512-jq1AH6C8MuteOoBPwkxHafmByhL9j5q4OaPGdbuD+ZtQJVzH+i6E3BJDQcBA09k57i2Hh2yQbEG8yObZ0jdlWw==", + "dev": true, + "requires": { + "has": "^1.0.3" + } + }, "is-data-descriptor": { "version": "0.1.4", "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-0.1.4.tgz", @@ -2370,12 +2112,6 @@ } } }, - "is-date-object": { - "version": "1.0.1", - "resolved": 
"https://registry.npmjs.org/is-date-object/-/is-date-object-1.0.1.tgz", - "integrity": "sha1-mqIOtq7rv/d/vTPnTKAbM1gdOhY=", - "dev": true - }, "is-descriptor": { "version": "0.1.6", "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-0.1.6.tgz", @@ -2395,6 +2131,13 @@ } } }, + "is-docker": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-docker/-/is-docker-2.1.1.tgz", + "integrity": "sha512-ZOoqiXfEwtGknTiuDEy8pN2CfE3TxMHprvNer1mXiqwkOT77Rw3YVrUQ52EqAOU3QAWDQ+bQdx7HJzrv7LS2Hw==", + "dev": true, + "optional": true + }, "is-extendable": { "version": "0.1.1", "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-0.1.1.tgz", @@ -2402,9 +2145,9 @@ "dev": true }, "is-fullwidth-code-point": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-2.0.0.tgz", - "integrity": "sha1-o7MKXE8ZkYMWeqq5O+764937ZU8=", + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", "dev": true }, "is-generator-fn": { @@ -2414,24 +2157,10 @@ "dev": true }, "is-number": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/is-number/-/is-number-3.0.0.tgz", - "integrity": "sha1-JP1iAaR4LPUFYcgQJ2r8fRLXEZU=", - "dev": true, - "requires": { - "kind-of": "^3.0.2" - }, - "dependencies": { - "kind-of": { - "version": "3.2.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", - "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", - "dev": true, - "requires": { - "is-buffer": "^1.1.5" - } - } - } + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", + "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", + "dev": true }, "is-plain-object": { "version": "2.0.4", @@ -2442,14 
+2171,11 @@ "isobject": "^3.0.1" } }, - "is-regex": { - "version": "1.0.4", - "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.0.4.tgz", - "integrity": "sha1-VRdIm1RwkbCTDglWVM7SXul+lJE=", - "dev": true, - "requires": { - "has": "^1.0.1" - } + "is-potential-custom-element-name": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-potential-custom-element-name/-/is-potential-custom-element-name-1.0.0.tgz", + "integrity": "sha1-DFLlS8yjkbssSUsh6GJtczbG45c=", + "dev": true }, "is-stream": { "version": "1.1.0", @@ -2457,15 +2183,6 @@ "integrity": "sha1-EtSj3U5o4Lec6428hBc66A2RykQ=", "dev": true }, - "is-symbol": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/is-symbol/-/is-symbol-1.0.2.tgz", - "integrity": "sha512-HS8bZ9ox60yCJLH9snBpIwv9pYUAkcuLhSA1oero1UB5y9aiQpRA8y2ex945AOtCZL1lJDeIk3G5LthswI46Lw==", - "dev": true, - "requires": { - "has-symbols": "^1.0.0" - } - }, "is-typedarray": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/is-typedarray/-/is-typedarray-1.0.0.tgz", @@ -2479,10 +2196,14 @@ "dev": true }, "is-wsl": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-1.1.0.tgz", - "integrity": "sha1-HxbkqiKwTRM2tmGIpmrzxgDDpm0=", - "dev": true + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-2.2.0.tgz", + "integrity": "sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==", + "dev": true, + "optional": true, + "requires": { + "is-docker": "^2.0.0" + } }, "isarray": { "version": "1.0.0", @@ -2509,509 +2230,586 @@ "dev": true }, "istanbul-lib-coverage": { - "version": "2.0.5", - "resolved": "https://registry.npmjs.org/istanbul-lib-coverage/-/istanbul-lib-coverage-2.0.5.tgz", - "integrity": "sha512-8aXznuEPCJvGnMSRft4udDRDtb1V3pkQkMMI5LI+6HuQz5oQ4J2UFn1H82raA3qJtyOLkkwVqICBQkjnGtn5mA==", + "version": "3.0.0", + "resolved": 
"https://registry.npmjs.org/istanbul-lib-coverage/-/istanbul-lib-coverage-3.0.0.tgz", + "integrity": "sha512-UiUIqxMgRDET6eR+o5HbfRYP1l0hqkWOs7vNxC/mggutCMUIhWMm8gAHb8tHlyfD3/l6rlgNA5cKdDzEAf6hEg==", "dev": true }, "istanbul-lib-instrument": { - "version": "3.3.0", - "resolved": "https://registry.npmjs.org/istanbul-lib-instrument/-/istanbul-lib-instrument-3.3.0.tgz", - "integrity": "sha512-5nnIN4vo5xQZHdXno/YDXJ0G+I3dAm4XgzfSVTPLQpj/zAV2dV6Juy0yaf10/zrJOJeHoN3fraFe+XRq2bFVZA==", + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/istanbul-lib-instrument/-/istanbul-lib-instrument-4.0.3.tgz", + "integrity": "sha512-BXgQl9kf4WTCPCCpmFGoJkz/+uhvm7h7PFKUYxh7qarQd3ER33vHG//qaE8eN25l07YqZPpHXU9I09l/RD5aGQ==", "dev": true, "requires": { - "@babel/generator": "^7.4.0", - "@babel/parser": "^7.4.3", - "@babel/template": "^7.4.0", - "@babel/traverse": "^7.4.3", - "@babel/types": "^7.4.0", - "istanbul-lib-coverage": "^2.0.5", - "semver": "^6.0.0" + "@babel/core": "^7.7.5", + "@istanbuljs/schema": "^0.1.2", + "istanbul-lib-coverage": "^3.0.0", + "semver": "^6.3.0" }, "dependencies": { "semver": { - "version": "6.2.0", - "resolved": "https://registry.npmjs.org/semver/-/semver-6.2.0.tgz", - "integrity": "sha512-jdFC1VdUGT/2Scgbimf7FSx9iJLXoqfglSF+gJeuNWVpiE37OIbc1jywR/GJyFdz3mnkz2/id0L0J/cr0izR5A==", + "version": "6.3.0", + "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.0.tgz", + "integrity": "sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw==", "dev": true } } }, "istanbul-lib-report": { - "version": "2.0.8", - "resolved": "https://registry.npmjs.org/istanbul-lib-report/-/istanbul-lib-report-2.0.8.tgz", - "integrity": "sha512-fHBeG573EIihhAblwgxrSenp0Dby6tJMFR/HvlerBsrCTD5bkUuoNtn3gVh29ZCS824cGGBPn7Sg7cNk+2xUsQ==", + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/istanbul-lib-report/-/istanbul-lib-report-3.0.0.tgz", + "integrity": 
"sha512-wcdi+uAKzfiGT2abPpKZ0hSU1rGQjUQnLvtY5MpQ7QCTahD3VODhcu4wcfY1YtkGaDD5yuydOLINXsfbus9ROw==", "dev": true, "requires": { - "istanbul-lib-coverage": "^2.0.5", - "make-dir": "^2.1.0", - "supports-color": "^6.1.0" - }, - "dependencies": { - "supports-color": { - "version": "6.1.0", - "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-6.1.0.tgz", - "integrity": "sha512-qe1jfm1Mg7Nq/NSh6XE24gPXROEVsWHxC1LIx//XNlD9iw7YZQGjZNjYN7xGaEG6iKdA8EtNFW6R0gjnVXp+wQ==", - "dev": true, - "requires": { - "has-flag": "^3.0.0" - } - } + "istanbul-lib-coverage": "^3.0.0", + "make-dir": "^3.0.0", + "supports-color": "^7.1.0" } }, "istanbul-lib-source-maps": { - "version": "3.0.6", - "resolved": "https://registry.npmjs.org/istanbul-lib-source-maps/-/istanbul-lib-source-maps-3.0.6.tgz", - "integrity": "sha512-R47KzMtDJH6X4/YW9XTx+jrLnZnscW4VpNN+1PViSYTejLVPWv7oov+Duf8YQSPyVRUvueQqz1TcsC6mooZTXw==", + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/istanbul-lib-source-maps/-/istanbul-lib-source-maps-4.0.0.tgz", + "integrity": "sha512-c16LpFRkR8vQXyHZ5nLpY35JZtzj1PQY1iZmesUbf1FZHbIupcWfjgOXBY9YHkLEQ6puz1u4Dgj6qmU/DisrZg==", "dev": true, "requires": { - "debug": "^4.1.1", - "istanbul-lib-coverage": "^2.0.5", - "make-dir": "^2.1.0", - "rimraf": "^2.6.3", - "source-map": "^0.6.1" - }, - "dependencies": { - "debug": { - "version": "4.1.1", - "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz", - "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==", - "dev": true, - "requires": { - "ms": "^2.1.1" - } - }, - "ms": { - "version": "2.1.2", - "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", - "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", - "dev": true - } + "debug": "^4.1.1", + "istanbul-lib-coverage": "^3.0.0", + "source-map": "^0.6.1" } }, "istanbul-reports": { - "version": "2.2.6", - "resolved": 
"https://registry.npmjs.org/istanbul-reports/-/istanbul-reports-2.2.6.tgz", - "integrity": "sha512-SKi4rnMyLBKe0Jy2uUdx28h8oG7ph2PPuQPvIAh31d+Ci+lSiEu4C+h3oBPuJ9+mPKhOyW0M8gY4U5NM1WLeXA==", + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/istanbul-reports/-/istanbul-reports-3.0.2.tgz", + "integrity": "sha512-9tZvz7AiR3PEDNGiV9vIouQ/EAcqMXFmkcA1CDFTwOB98OZVDL0PH9glHotf5Ugp6GCOTypfzGWI/OqjWNCRUw==", "dev": true, "requires": { - "handlebars": "^4.1.2" + "html-escaper": "^2.0.0", + "istanbul-lib-report": "^3.0.0" } }, "jest": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest/-/jest-24.8.0.tgz", - "integrity": "sha512-o0HM90RKFRNWmAWvlyV8i5jGZ97pFwkeVoGvPW1EtLTgJc2+jcuqcbbqcSZLE/3f2S5pt0y2ZBETuhpWNl1Reg==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest/-/jest-26.6.0.tgz", + "integrity": "sha512-jxTmrvuecVISvKFFhOkjsWRZV7sFqdSUAd1ajOKY+/QE/aLBVstsJ/dX8GczLzwiT6ZEwwmZqtCUHLHHQVzcfA==", "dev": true, "requires": { - "import-local": "^2.0.0", - "jest-cli": "^24.8.0" + "@jest/core": "^26.6.0", + "import-local": "^3.0.2", + "jest-cli": "^26.6.0" }, "dependencies": { "jest-cli": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-cli/-/jest-cli-24.8.0.tgz", - "integrity": "sha512-+p6J00jSMPQ116ZLlHJJvdf8wbjNbZdeSX9ptfHX06/MSNaXmKihQzx5vQcw0q2G6JsdVkUIdWbOWtSnaYs3yA==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-cli/-/jest-cli-26.6.0.tgz", + "integrity": "sha512-lJAMZGpmML+y3Kfln6L5DGRTfKGQ+n1JDM1RQstojSLUhe/EaXWR8vmcx70v4CyJKvFZs7c/0QDkPX5ra/aDew==", "dev": true, "requires": { - "@jest/core": "^24.8.0", - "@jest/test-result": "^24.8.0", - "@jest/types": "^24.8.0", - "chalk": "^2.0.1", + "@jest/core": "^26.6.0", + "@jest/test-result": "^26.6.0", + "@jest/types": "^26.6.0", + "chalk": "^4.0.0", "exit": "^0.1.2", - "import-local": "^2.0.0", + "graceful-fs": "^4.2.4", + "import-local": "^3.0.2", "is-ci": "^2.0.0", - "jest-config": "^24.8.0", - "jest-util": "^24.8.0", - 
"jest-validate": "^24.8.0", + "jest-config": "^26.6.0", + "jest-util": "^26.6.0", + "jest-validate": "^26.6.0", "prompts": "^2.0.1", - "realpath-native": "^1.1.0", - "yargs": "^12.0.2" + "yargs": "^15.4.1" } } } }, "jest-changed-files": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-changed-files/-/jest-changed-files-24.8.0.tgz", - "integrity": "sha512-qgANC1Yrivsq+UrLXsvJefBKVoCsKB0Hv+mBb6NMjjZ90wwxCDmU3hsCXBya30cH+LnPYjwgcU65i6yJ5Nfuug==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-changed-files/-/jest-changed-files-26.6.0.tgz", + "integrity": "sha512-k8PZzlp3cRWDe0fDc/pYs+c4w36+hiWXe1PpW/pW1UJmu1TNTAcQfZUrVYleij+uEqlY6z4mPv7Iff3kY0o5SQ==", "dev": true, "requires": { - "@jest/types": "^24.8.0", - "execa": "^1.0.0", - "throat": "^4.0.0" + "@jest/types": "^26.6.0", + "execa": "^4.0.0", + "throat": "^5.0.0" + }, + "dependencies": { + "cross-spawn": { + "version": "7.0.3", + "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz", + "integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==", + "dev": true, + "requires": { + "path-key": "^3.1.0", + "shebang-command": "^2.0.0", + "which": "^2.0.1" + } + }, + "execa": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/execa/-/execa-4.0.3.tgz", + "integrity": "sha512-WFDXGHckXPWZX19t1kCsXzOpqX9LWYNqn4C+HqZlk/V0imTkzJZqf87ZBhvpHaftERYknpk0fjSylnXVlVgI0A==", + "dev": true, + "requires": { + "cross-spawn": "^7.0.0", + "get-stream": "^5.0.0", + "human-signals": "^1.1.1", + "is-stream": "^2.0.0", + "merge-stream": "^2.0.0", + "npm-run-path": "^4.0.0", + "onetime": "^5.1.0", + "signal-exit": "^3.0.2", + "strip-final-newline": "^2.0.0" + } + }, + "get-stream": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-5.2.0.tgz", + "integrity": "sha512-nBF+F1rAZVCu/p7rjzgA+Yb4lfYXrpl7a6VmJrU8wF9I1CKvP/QwPNZHnOlwbTkY6dvtFIzFMSyQXbLoTQPRpA==", + "dev": true, + 
"requires": { + "pump": "^3.0.0" + } + }, + "is-stream": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.0.tgz", + "integrity": "sha512-XCoy+WlUr7d1+Z8GgSuXmpuUFC9fOhRXglJMx+dwLKTkL44Cjd4W1Z5P+BQZpr+cR93aGP4S/s7Ftw6Nd/kiEw==", + "dev": true + }, + "npm-run-path": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-4.0.1.tgz", + "integrity": "sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==", + "dev": true, + "requires": { + "path-key": "^3.0.0" + } + }, + "path-key": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz", + "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", + "dev": true + }, + "shebang-command": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz", + "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", + "dev": true, + "requires": { + "shebang-regex": "^3.0.0" + } + }, + "shebang-regex": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz", + "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", + "dev": true + }, + "which": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", + "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", + "dev": true, + "requires": { + "isexe": "^2.0.0" + } + } } }, "jest-config": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-config/-/jest-config-24.8.0.tgz", - "integrity": "sha512-Czl3Nn2uEzVGsOeaewGWoDPD8GStxCpAe0zOYs2x2l0fZAgPbCr3uwUkgNKV3LwE13VXythM946cd5rdGkkBZw==", + "version": "26.6.0", + "resolved": 
"https://registry.npmjs.org/jest-config/-/jest-config-26.6.0.tgz", + "integrity": "sha512-RCR1Kf7MGJ5waVCvrj/k3nCAJKquWZlzs8rkskzj0KlG392hNBOaYd5FQ4cCac08j6pwfIDOwNvMcy0/FqguJg==", "dev": true, "requires": { "@babel/core": "^7.1.0", - "@jest/test-sequencer": "^24.8.0", - "@jest/types": "^24.8.0", - "babel-jest": "^24.8.0", - "chalk": "^2.0.1", + "@jest/test-sequencer": "^26.6.0", + "@jest/types": "^26.6.0", + "babel-jest": "^26.6.0", + "chalk": "^4.0.0", + "deepmerge": "^4.2.2", "glob": "^7.1.1", - "jest-environment-jsdom": "^24.8.0", - "jest-environment-node": "^24.8.0", - "jest-get-type": "^24.8.0", - "jest-jasmine2": "^24.8.0", - "jest-regex-util": "^24.3.0", - "jest-resolve": "^24.8.0", - "jest-util": "^24.8.0", - "jest-validate": "^24.8.0", - "micromatch": "^3.1.10", - "pretty-format": "^24.8.0", - "realpath-native": "^1.1.0" + "graceful-fs": "^4.2.4", + "jest-environment-jsdom": "^26.6.0", + "jest-environment-node": "^26.6.0", + "jest-get-type": "^26.3.0", + "jest-jasmine2": "^26.6.0", + "jest-regex-util": "^26.0.0", + "jest-resolve": "^26.6.0", + "jest-util": "^26.6.0", + "jest-validate": "^26.6.0", + "micromatch": "^4.0.2", + "pretty-format": "^26.6.0" } }, "jest-diff": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-diff/-/jest-diff-24.8.0.tgz", - "integrity": "sha512-wxetCEl49zUpJ/bvUmIFjd/o52J+yWcoc5ZyPq4/W1LUKGEhRYDIbP1KcF6t+PvqNrGAFk4/JhtxDq/Nnzs66g==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-diff/-/jest-diff-26.6.0.tgz", + "integrity": "sha512-IH09rKsdWY8YEY7ii2BHlSq59oXyF2pK3GoK+hOK9eD/x6009eNB5Jv1shLMKgxekodPzLlV7eZP1jPFQYds8w==", "dev": true, "requires": { - "chalk": "^2.0.1", - "diff-sequences": "^24.3.0", - "jest-get-type": "^24.8.0", - "pretty-format": "^24.8.0" + "chalk": "^4.0.0", + "diff-sequences": "^26.5.0", + "jest-get-type": "^26.3.0", + "pretty-format": "^26.6.0" } }, "jest-docblock": { - "version": "24.3.0", - "resolved": 
"https://registry.npmjs.org/jest-docblock/-/jest-docblock-24.3.0.tgz", - "integrity": "sha512-nlANmF9Yq1dufhFlKG9rasfQlrY7wINJbo3q01tu56Jv5eBU5jirylhF2O5ZBnLxzOVBGRDz/9NAwNyBtG4Nyg==", + "version": "26.0.0", + "resolved": "https://registry.npmjs.org/jest-docblock/-/jest-docblock-26.0.0.tgz", + "integrity": "sha512-RDZ4Iz3QbtRWycd8bUEPxQsTlYazfYn/h5R65Fc6gOfwozFhoImx+affzky/FFBuqISPTqjXomoIGJVKBWoo0w==", "dev": true, "requires": { - "detect-newline": "^2.1.0" + "detect-newline": "^3.0.0" } }, "jest-each": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-each/-/jest-each-24.8.0.tgz", - "integrity": "sha512-NrwK9gaL5+XgrgoCsd9svsoWdVkK4gnvyhcpzd6m487tXHqIdYeykgq3MKI1u4I+5Zf0tofr70at9dWJDeb+BA==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-each/-/jest-each-26.6.0.tgz", + "integrity": "sha512-7LzSNwNviYnm4FWK46itIE03NqD/8O8/7tVQ5rwTdTNrmPMQoQ1Z7hEFQ1uzRReluOFislpurpnQ0QsclSiDkA==", "dev": true, "requires": { - "@jest/types": "^24.8.0", - "chalk": "^2.0.1", - "jest-get-type": "^24.8.0", - "jest-util": "^24.8.0", - "pretty-format": "^24.8.0" + "@jest/types": "^26.6.0", + "chalk": "^4.0.0", + "jest-get-type": "^26.3.0", + "jest-util": "^26.6.0", + "pretty-format": "^26.6.0" } }, "jest-environment-jsdom": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-environment-jsdom/-/jest-environment-jsdom-24.8.0.tgz", - "integrity": "sha512-qbvgLmR7PpwjoFjM/sbuqHJt/NCkviuq9vus9NBn/76hhSidO+Z6Bn9tU8friecegbJL8gzZQEMZBQlFWDCwAQ==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-environment-jsdom/-/jest-environment-jsdom-26.6.0.tgz", + "integrity": "sha512-bXO9IG7a3YlyiHxwfKF+OWoTA+GIw4FrD+Y0pb6CC+nKs5JuSRZmR2ovEX6PWo6KY42ka3JoZOp3KEnXiFPPCg==", "dev": true, "requires": { - "@jest/environment": "^24.8.0", - "@jest/fake-timers": "^24.8.0", - "@jest/types": "^24.8.0", - "jest-mock": "^24.8.0", - "jest-util": "^24.8.0", - "jsdom": "^11.5.1" + "@jest/environment": "^26.6.0", + 
"@jest/fake-timers": "^26.6.0", + "@jest/types": "^26.6.0", + "@types/node": "*", + "jest-mock": "^26.6.0", + "jest-util": "^26.6.0", + "jsdom": "^16.4.0" } }, "jest-environment-node": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-environment-node/-/jest-environment-node-24.8.0.tgz", - "integrity": "sha512-vIGUEScd1cdDgR6sqn2M08sJTRLQp6Dk/eIkCeO4PFHxZMOgy+uYLPMC4ix3PEfM5Au/x3uQ/5Tl0DpXXZsJ/Q==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-environment-node/-/jest-environment-node-26.6.0.tgz", + "integrity": "sha512-kWU6ZD1h6fs7sIl6ufuK0sXW/3d6WLaj48iow0NxhgU6eY89d9K+0MVmE0cRcVlh53yMyxTK6b+TnhLOnlGp/A==", "dev": true, "requires": { - "@jest/environment": "^24.8.0", - "@jest/fake-timers": "^24.8.0", - "@jest/types": "^24.8.0", - "jest-mock": "^24.8.0", - "jest-util": "^24.8.0" + "@jest/environment": "^26.6.0", + "@jest/fake-timers": "^26.6.0", + "@jest/types": "^26.6.0", + "@types/node": "*", + "jest-mock": "^26.6.0", + "jest-util": "^26.6.0" } }, "jest-get-type": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-get-type/-/jest-get-type-24.8.0.tgz", - "integrity": "sha512-RR4fo8jEmMD9zSz2nLbs2j0zvPpk/KCEz3a62jJWbd2ayNo0cb+KFRxPHVhE4ZmgGJEQp0fosmNz84IfqM8cMQ==", + "version": "26.3.0", + "resolved": "https://registry.npmjs.org/jest-get-type/-/jest-get-type-26.3.0.tgz", + "integrity": "sha512-TpfaviN1R2pQWkIihlfEanwOXK0zcxrKEE4MlU6Tn7keoXdN6/3gK/xl0yEh8DOunn5pOVGKf8hB4R9gVh04ig==", "dev": true }, "jest-haste-map": { - "version": "24.8.1", - "resolved": "https://registry.npmjs.org/jest-haste-map/-/jest-haste-map-24.8.1.tgz", - "integrity": "sha512-SwaxMGVdAZk3ernAx2Uv2sorA7jm3Kx+lR0grp6rMmnY06Kn/urtKx1LPN2mGTea4fCT38impYT28FfcLUhX0g==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-haste-map/-/jest-haste-map-26.6.0.tgz", + "integrity": "sha512-RpNqAGMR58uG9E9vWITorX2/R7he/tSbHWldX5upt1ymEcmCaXczqXxjqI6xOtRR8Ev6ZEYDfgSA5Fy7WHUL5w==", "dev": true, "requires": { - 
"@jest/types": "^24.8.0", - "anymatch": "^2.0.0", + "@jest/types": "^26.6.0", + "@types/graceful-fs": "^4.1.2", + "@types/node": "*", + "anymatch": "^3.0.3", "fb-watchman": "^2.0.0", - "fsevents": "^1.2.7", - "graceful-fs": "^4.1.15", - "invariant": "^2.2.4", - "jest-serializer": "^24.4.0", - "jest-util": "^24.8.0", - "jest-worker": "^24.6.0", - "micromatch": "^3.1.10", + "fsevents": "^2.1.2", + "graceful-fs": "^4.2.4", + "jest-regex-util": "^26.0.0", + "jest-serializer": "^26.5.0", + "jest-util": "^26.6.0", + "jest-worker": "^26.5.0", + "micromatch": "^4.0.2", "sane": "^4.0.3", "walker": "^1.0.7" } }, "jest-jasmine2": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-jasmine2/-/jest-jasmine2-24.8.0.tgz", - "integrity": "sha512-cEky88npEE5LKd5jPpTdDCLvKkdyklnaRycBXL6GNmpxe41F0WN44+i7lpQKa/hcbXaQ+rc9RMaM4dsebrYong==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-jasmine2/-/jest-jasmine2-26.6.0.tgz", + "integrity": "sha512-2E3c+0A9y2OIK5caw5qlcm3b4doaf8FSfXKTX3xqKTUJoR4zXh0xvERBNWxZP9xMNXEi/2Z3LVsZpR2hROgixA==", "dev": true, "requires": { "@babel/traverse": "^7.1.0", - "@jest/environment": "^24.8.0", - "@jest/test-result": "^24.8.0", - "@jest/types": "^24.8.0", - "chalk": "^2.0.1", + "@jest/environment": "^26.6.0", + "@jest/source-map": "^26.5.0", + "@jest/test-result": "^26.6.0", + "@jest/types": "^26.6.0", + "@types/node": "*", + "chalk": "^4.0.0", "co": "^4.6.0", - "expect": "^24.8.0", + "expect": "^26.6.0", "is-generator-fn": "^2.0.0", - "jest-each": "^24.8.0", - "jest-matcher-utils": "^24.8.0", - "jest-message-util": "^24.8.0", - "jest-runtime": "^24.8.0", - "jest-snapshot": "^24.8.0", - "jest-util": "^24.8.0", - "pretty-format": "^24.8.0", - "throat": "^4.0.0" + "jest-each": "^26.6.0", + "jest-matcher-utils": "^26.6.0", + "jest-message-util": "^26.6.0", + "jest-runtime": "^26.6.0", + "jest-snapshot": "^26.6.0", + "jest-util": "^26.6.0", + "pretty-format": "^26.6.0", + "throat": "^5.0.0" } }, "jest-leak-detector": { 
- "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-leak-detector/-/jest-leak-detector-24.8.0.tgz", - "integrity": "sha512-cG0yRSK8A831LN8lIHxI3AblB40uhv0z+SsQdW3GoMMVcK+sJwrIIyax5tu3eHHNJ8Fu6IMDpnLda2jhn2pD/g==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-leak-detector/-/jest-leak-detector-26.6.0.tgz", + "integrity": "sha512-3oMv34imWTl1/nwKnmE/DxYo3QqHnZeF3nO6UzldppkhW0Za7OY2DYyWiamqVzwdUrjhoQkY5g+aF6Oc3alYEQ==", "dev": true, "requires": { - "pretty-format": "^24.8.0" + "jest-get-type": "^26.3.0", + "pretty-format": "^26.6.0" } }, "jest-matcher-utils": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-matcher-utils/-/jest-matcher-utils-24.8.0.tgz", - "integrity": "sha512-lex1yASY51FvUuHgm0GOVj7DCYEouWSlIYmCW7APSqB9v8mXmKSn5+sWVF0MhuASG0bnYY106/49JU1FZNl5hw==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-matcher-utils/-/jest-matcher-utils-26.6.0.tgz", + "integrity": "sha512-BUy/dQYb7ELGRazmK4ZVkbfPYCaNnrMtw1YljVhcKzWUxBM0xQ+bffrfnMLdRZp4wUUcT4ahaVnA3VWZtXWP9Q==", "dev": true, "requires": { - "chalk": "^2.0.1", - "jest-diff": "^24.8.0", - "jest-get-type": "^24.8.0", - "pretty-format": "^24.8.0" + "chalk": "^4.0.0", + "jest-diff": "^26.6.0", + "jest-get-type": "^26.3.0", + "pretty-format": "^26.6.0" } }, "jest-message-util": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-message-util/-/jest-message-util-24.8.0.tgz", - "integrity": "sha512-p2k71rf/b6ns8btdB0uVdljWo9h0ovpnEe05ZKWceQGfXYr4KkzgKo3PBi8wdnd9OtNh46VpNIJynUn/3MKm1g==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-message-util/-/jest-message-util-26.6.0.tgz", + "integrity": "sha512-WPAeS38Kza29f04I0iOIQrXeiebRXjmn6cFehzI7KKJOgT0NmqYAcLgjWnIAfKs5FBmEQgje1kXab0DaLKCl2w==", "dev": true, "requires": { "@babel/code-frame": "^7.0.0", - "@jest/test-result": "^24.8.0", - "@jest/types": "^24.8.0", - "@types/stack-utils": "^1.0.1", - "chalk": "^2.0.1", - "micromatch": 
"^3.1.10", - "slash": "^2.0.0", - "stack-utils": "^1.0.1" + "@jest/types": "^26.6.0", + "@types/stack-utils": "^2.0.0", + "chalk": "^4.0.0", + "graceful-fs": "^4.2.4", + "micromatch": "^4.0.2", + "slash": "^3.0.0", + "stack-utils": "^2.0.2" } }, "jest-mock": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-mock/-/jest-mock-24.8.0.tgz", - "integrity": "sha512-6kWugwjGjJw+ZkK4mDa0Df3sDlUTsV47MSrT0nGQ0RBWJbpODDQ8MHDVtGtUYBne3IwZUhtB7elxHspU79WH3A==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-mock/-/jest-mock-26.6.0.tgz", + "integrity": "sha512-HsNmL8vVIn1rL1GWA21Drpy9Cl+7GImwbWz/0fkWHrUXVzuaG7rP0vwLtE+/n70Mt0U8nPkz8fxioi3SC0wqhw==", "dev": true, "requires": { - "@jest/types": "^24.8.0" + "@jest/types": "^26.6.0", + "@types/node": "*" } }, "jest-pnp-resolver": { - "version": "1.2.1", - "resolved": "https://registry.npmjs.org/jest-pnp-resolver/-/jest-pnp-resolver-1.2.1.tgz", - "integrity": "sha512-pgFw2tm54fzgYvc/OHrnysABEObZCUNFnhjoRjaVOCN8NYc032/gVjPaHD4Aq6ApkSieWtfKAFQtmDKAmhupnQ==", + "version": "1.2.2", + "resolved": "https://registry.npmjs.org/jest-pnp-resolver/-/jest-pnp-resolver-1.2.2.tgz", + "integrity": "sha512-olV41bKSMm8BdnuMsewT4jqlZ8+3TCARAXjZGT9jcoSnrfUnRCqnMoF9XEeoWjbzObpqF9dRhHQj0Xb9QdF6/w==", "dev": true }, "jest-regex-util": { - "version": "24.3.0", - "resolved": "https://registry.npmjs.org/jest-regex-util/-/jest-regex-util-24.3.0.tgz", - "integrity": "sha512-tXQR1NEOyGlfylyEjg1ImtScwMq8Oh3iJbGTjN7p0J23EuVX1MA8rwU69K4sLbCmwzgCUbVkm0FkSF9TdzOhtg==", + "version": "26.0.0", + "resolved": "https://registry.npmjs.org/jest-regex-util/-/jest-regex-util-26.0.0.tgz", + "integrity": "sha512-Gv3ZIs/nA48/Zvjrl34bf+oD76JHiGDUxNOVgUjh3j890sblXryjY4rss71fPtD/njchl6PSE2hIhvyWa1eT0A==", "dev": true }, "jest-resolve": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-resolve/-/jest-resolve-24.8.0.tgz", - "integrity": 
"sha512-+hjSzi1PoRvnuOICoYd5V/KpIQmkAsfjFO71458hQ2Whi/yf1GDeBOFj8Gxw4LrApHsVJvn5fmjcPdmoUHaVKw==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-resolve/-/jest-resolve-26.6.0.tgz", + "integrity": "sha512-tRAz2bwraHufNp+CCmAD8ciyCpXCs1NQxB5EJAmtCFy6BN81loFEGWKzYu26Y62lAJJe4X4jg36Kf+NsQyiStQ==", "dev": true, "requires": { - "@jest/types": "^24.8.0", - "browser-resolve": "^1.11.3", - "chalk": "^2.0.1", - "jest-pnp-resolver": "^1.2.1", - "realpath-native": "^1.1.0" + "@jest/types": "^26.6.0", + "chalk": "^4.0.0", + "graceful-fs": "^4.2.4", + "jest-pnp-resolver": "^1.2.2", + "jest-util": "^26.6.0", + "read-pkg-up": "^7.0.1", + "resolve": "^1.17.0", + "slash": "^3.0.0" } }, "jest-resolve-dependencies": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-resolve-dependencies/-/jest-resolve-dependencies-24.8.0.tgz", - "integrity": "sha512-hyK1qfIf/krV+fSNyhyJeq3elVMhK9Eijlwy+j5jqmZ9QsxwKBiP6qukQxaHtK8k6zql/KYWwCTQ+fDGTIJauw==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-resolve-dependencies/-/jest-resolve-dependencies-26.6.0.tgz", + "integrity": "sha512-4di+XUT7LwJJ8b8qFEEDQssC5+aeVjLhvRICCaS4alh/EVS9JCT1armfJ3pnSS8t4o6659WbMmKVo82H4LuUVw==", "dev": true, "requires": { - "@jest/types": "^24.8.0", - "jest-regex-util": "^24.3.0", - "jest-snapshot": "^24.8.0" + "@jest/types": "^26.6.0", + "jest-regex-util": "^26.0.0", + "jest-snapshot": "^26.6.0" } }, "jest-runner": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-runner/-/jest-runner-24.8.0.tgz", - "integrity": "sha512-utFqC5BaA3JmznbissSs95X1ZF+d+4WuOWwpM9+Ak356YtMhHE/GXUondZdcyAAOTBEsRGAgH/0TwLzfI9h7ow==", - "dev": true, - "requires": { - "@jest/console": "^24.7.1", - "@jest/environment": "^24.8.0", - "@jest/test-result": "^24.8.0", - "@jest/types": "^24.8.0", - "chalk": "^2.4.2", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-runner/-/jest-runner-26.6.0.tgz", + "integrity": 
"sha512-QpeN6pje8PQvFgT+wYOlzeycKd67qAvSw5FgYBiX2cTW+QTiObTzv/k09qRvT09rcCntFxUhy9VB1mgNGFLYIA==", + "dev": true, + "requires": { + "@jest/console": "^26.6.0", + "@jest/environment": "^26.6.0", + "@jest/test-result": "^26.6.0", + "@jest/types": "^26.6.0", + "@types/node": "*", + "chalk": "^4.0.0", + "emittery": "^0.7.1", "exit": "^0.1.2", - "graceful-fs": "^4.1.15", - "jest-config": "^24.8.0", - "jest-docblock": "^24.3.0", - "jest-haste-map": "^24.8.0", - "jest-jasmine2": "^24.8.0", - "jest-leak-detector": "^24.8.0", - "jest-message-util": "^24.8.0", - "jest-resolve": "^24.8.0", - "jest-runtime": "^24.8.0", - "jest-util": "^24.8.0", - "jest-worker": "^24.6.0", + "graceful-fs": "^4.2.4", + "jest-config": "^26.6.0", + "jest-docblock": "^26.0.0", + "jest-haste-map": "^26.6.0", + "jest-leak-detector": "^26.6.0", + "jest-message-util": "^26.6.0", + "jest-resolve": "^26.6.0", + "jest-runtime": "^26.6.0", + "jest-util": "^26.6.0", + "jest-worker": "^26.5.0", "source-map-support": "^0.5.6", - "throat": "^4.0.0" + "throat": "^5.0.0" } }, "jest-runtime": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-runtime/-/jest-runtime-24.8.0.tgz", - "integrity": "sha512-Mq0aIXhvO/3bX44ccT+czU1/57IgOMyy80oM0XR/nyD5zgBcesF84BPabZi39pJVA6UXw+fY2Q1N+4BiVUBWOA==", - "dev": true, - "requires": { - "@jest/console": "^24.7.1", - "@jest/environment": "^24.8.0", - "@jest/source-map": "^24.3.0", - "@jest/transform": "^24.8.0", - "@jest/types": "^24.8.0", - "@types/yargs": "^12.0.2", - "chalk": "^2.0.1", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-runtime/-/jest-runtime-26.6.0.tgz", + "integrity": "sha512-JEz4YGnybFvtN4NLID6lsZf0bcd8jccwjWcG5TRE3fYVnxoX1egTthPjnC4btIwWJ6QaaHhtOQ/E3AGn8iClAw==", + "dev": true, + "requires": { + "@jest/console": "^26.6.0", + "@jest/environment": "^26.6.0", + "@jest/fake-timers": "^26.6.0", + "@jest/globals": "^26.6.0", + "@jest/source-map": "^26.5.0", + "@jest/test-result": "^26.6.0", + "@jest/transform": "^26.6.0", 
+ "@jest/types": "^26.6.0", + "@types/yargs": "^15.0.0", + "chalk": "^4.0.0", + "collect-v8-coverage": "^1.0.0", "exit": "^0.1.2", "glob": "^7.1.3", - "graceful-fs": "^4.1.15", - "jest-config": "^24.8.0", - "jest-haste-map": "^24.8.0", - "jest-message-util": "^24.8.0", - "jest-mock": "^24.8.0", - "jest-regex-util": "^24.3.0", - "jest-resolve": "^24.8.0", - "jest-snapshot": "^24.8.0", - "jest-util": "^24.8.0", - "jest-validate": "^24.8.0", - "realpath-native": "^1.1.0", - "slash": "^2.0.0", - "strip-bom": "^3.0.0", - "yargs": "^12.0.2" + "graceful-fs": "^4.2.4", + "jest-config": "^26.6.0", + "jest-haste-map": "^26.6.0", + "jest-message-util": "^26.6.0", + "jest-mock": "^26.6.0", + "jest-regex-util": "^26.0.0", + "jest-resolve": "^26.6.0", + "jest-snapshot": "^26.6.0", + "jest-util": "^26.6.0", + "jest-validate": "^26.6.0", + "slash": "^3.0.0", + "strip-bom": "^4.0.0", + "yargs": "^15.4.1" } }, "jest-serializer": { - "version": "24.4.0", - "resolved": "https://registry.npmjs.org/jest-serializer/-/jest-serializer-24.4.0.tgz", - "integrity": "sha512-k//0DtglVstc1fv+GY/VHDIjrtNjdYvYjMlbLUed4kxrE92sIUewOi5Hj3vrpB8CXfkJntRPDRjCrCvUhBdL8Q==", - "dev": true + "version": "26.5.0", + "resolved": "https://registry.npmjs.org/jest-serializer/-/jest-serializer-26.5.0.tgz", + "integrity": "sha512-+h3Gf5CDRlSLdgTv7y0vPIAoLgX/SI7T4v6hy+TEXMgYbv+ztzbg5PSN6mUXAT/hXYHvZRWm+MaObVfqkhCGxA==", + "dev": true, + "requires": { + "@types/node": "*", + "graceful-fs": "^4.2.4" + } }, "jest-snapshot": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-snapshot/-/jest-snapshot-24.8.0.tgz", - "integrity": "sha512-5ehtWoc8oU9/cAPe6fez6QofVJLBKyqkY2+TlKTOf0VllBB/mqUNdARdcjlZrs9F1Cv+/HKoCS/BknT0+tmfPg==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-snapshot/-/jest-snapshot-26.6.0.tgz", + "integrity": "sha512-mcqJZeIZqxomvBcsaiIbiEe2g7K1UxnUpTwjMoHb+DX4uFGnuZoZ6m28YOYRyCfZsdU9mmq73rNBnEH2atTR4Q==", "dev": true, "requires": { "@babel/types": "^7.0.0", - 
"@jest/types": "^24.8.0", - "chalk": "^2.0.1", - "expect": "^24.8.0", - "jest-diff": "^24.8.0", - "jest-matcher-utils": "^24.8.0", - "jest-message-util": "^24.8.0", - "jest-resolve": "^24.8.0", - "mkdirp": "^0.5.1", + "@jest/types": "^26.6.0", + "@types/babel__traverse": "^7.0.4", + "@types/prettier": "^2.0.0", + "chalk": "^4.0.0", + "expect": "^26.6.0", + "graceful-fs": "^4.2.4", + "jest-diff": "^26.6.0", + "jest-get-type": "^26.3.0", + "jest-haste-map": "^26.6.0", + "jest-matcher-utils": "^26.6.0", + "jest-message-util": "^26.6.0", + "jest-resolve": "^26.6.0", "natural-compare": "^1.4.0", - "pretty-format": "^24.8.0", - "semver": "^5.5.0" + "pretty-format": "^26.6.0", + "semver": "^7.3.2" + }, + "dependencies": { + "semver": { + "version": "7.3.2", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.2.tgz", + "integrity": "sha512-OrOb32TeeambH6UrhtShmF7CRDqhL6/5XpPNp2DuRH6+9QLw/orhp72j87v8Qa1ScDkvrrBNpZcDejAirJmfXQ==", + "dev": true + } } }, "jest-util": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-util/-/jest-util-24.8.0.tgz", - "integrity": "sha512-DYZeE+XyAnbNt0BG1OQqKy/4GVLPtzwGx5tsnDrFcax36rVE3lTA5fbvgmbVPUZf9w77AJ8otqR4VBbfFJkUZA==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-util/-/jest-util-26.6.0.tgz", + "integrity": "sha512-/cUGqcnKeZMjvTQLfJo65nBOEZ/k0RB/8usv2JpfYya05u0XvBmKkIH5o5c4nCh9DD61B1YQjMGGqh1Ha0aXdg==", "dev": true, "requires": { - "@jest/console": "^24.7.1", - "@jest/fake-timers": "^24.8.0", - "@jest/source-map": "^24.3.0", - "@jest/test-result": "^24.8.0", - "@jest/types": "^24.8.0", - "callsites": "^3.0.0", - "chalk": "^2.0.1", - "graceful-fs": "^4.1.15", + "@jest/types": "^26.6.0", + "@types/node": "*", + "chalk": "^4.0.0", + "graceful-fs": "^4.2.4", "is-ci": "^2.0.0", - "mkdirp": "^0.5.1", - "slash": "^2.0.0", - "source-map": "^0.6.0" + "micromatch": "^4.0.2" } }, "jest-validate": { - "version": "24.8.0", - "resolved": 
"https://registry.npmjs.org/jest-validate/-/jest-validate-24.8.0.tgz", - "integrity": "sha512-+/N7VOEMW1Vzsrk3UWBDYTExTPwf68tavEPKDnJzrC6UlHtUDU/fuEdXqFoHzv9XnQ+zW6X3qMZhJ3YexfeLDA==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-validate/-/jest-validate-26.6.0.tgz", + "integrity": "sha512-FKHNqvh1Pgs4NWas56gsTPmjcIoGAAzSVUCK1+g8euzuCGbmdEr8LRTtOEFjd29uMZUk0PhzmzKGlHPe6j3UWw==", "dev": true, "requires": { - "@jest/types": "^24.8.0", - "camelcase": "^5.0.0", - "chalk": "^2.0.1", - "jest-get-type": "^24.8.0", - "leven": "^2.1.0", - "pretty-format": "^24.8.0" + "@jest/types": "^26.6.0", + "camelcase": "^6.0.0", + "chalk": "^4.0.0", + "jest-get-type": "^26.3.0", + "leven": "^3.1.0", + "pretty-format": "^26.6.0" + }, + "dependencies": { + "camelcase": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-6.1.0.tgz", + "integrity": "sha512-WCMml9ivU60+8rEJgELlFp1gxFcEGxwYleE3bziHEDeqsqAWGHdimB7beBFGjLzVNgPGyDsfgXLQEYMpmIFnVQ==", + "dev": true + } } }, "jest-watcher": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/jest-watcher/-/jest-watcher-24.8.0.tgz", - "integrity": "sha512-SBjwHt5NedQoVu54M5GEx7cl7IGEFFznvd/HNT8ier7cCAx/Qgu9ZMlaTQkvK22G1YOpcWBLQPFSImmxdn3DAw==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/jest-watcher/-/jest-watcher-26.6.0.tgz", + "integrity": "sha512-gw5BvcgPi0PKpMlNWQjUet5C5A4JOYrT7gexdP6+DR/f7mRm7wE0o1GqwPwcTsTwo0/FNf9c/kIDXTRaSAYwlw==", "dev": true, "requires": { - "@jest/test-result": "^24.8.0", - "@jest/types": "^24.8.0", - "@types/yargs": "^12.0.9", - "ansi-escapes": "^3.0.0", - "chalk": "^2.0.1", - "jest-util": "^24.8.0", - "string-length": "^2.0.0" + "@jest/test-result": "^26.6.0", + "@jest/types": "^26.6.0", + "@types/node": "*", + "ansi-escapes": "^4.2.1", + "chalk": "^4.0.0", + "jest-util": "^26.6.0", + "string-length": "^4.0.1" } }, "jest-worker": { - "version": "24.6.0", - "resolved": 
"https://registry.npmjs.org/jest-worker/-/jest-worker-24.6.0.tgz", - "integrity": "sha512-jDwgW5W9qGNvpI1tNnvajh0a5IE/PuGLFmHk6aR/BZFz8tSgGw17GsDPXAJ6p91IvYDjOw8GpFbvvZGAK+DPQQ==", + "version": "26.5.0", + "resolved": "https://registry.npmjs.org/jest-worker/-/jest-worker-26.5.0.tgz", + "integrity": "sha512-kTw66Dn4ZX7WpjZ7T/SUDgRhapFRKWmisVAF0Rv4Fu8SLFD7eLbqpLvbxVqYhSgaWa7I+bW7pHnbyfNsH6stug==", "dev": true, "requires": { - "merge-stream": "^1.0.1", - "supports-color": "^6.1.0" - }, - "dependencies": { - "supports-color": { - "version": "6.1.0", - "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-6.1.0.tgz", - "integrity": "sha512-qe1jfm1Mg7Nq/NSh6XE24gPXROEVsWHxC1LIx//XNlD9iw7YZQGjZNjYN7xGaEG6iKdA8EtNFW6R0gjnVXp+wQ==", - "dev": true, - "requires": { - "has-flag": "^3.0.0" - } - } + "@types/node": "*", + "merge-stream": "^2.0.0", + "supports-color": "^7.0.0" } }, "js-tokens": { @@ -3020,6 +2818,16 @@ "integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==", "dev": true }, + "js-yaml": { + "version": "3.14.0", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.0.tgz", + "integrity": "sha512-/4IbIeHcD9VMHFqDR/gQ7EdZdLimOvW2DdcxFjdyyZ9NsbS+ccrXqVWDtab/lRl5AlUqmpBx8EhPaWR+OtY17A==", + "dev": true, + "requires": { + "argparse": "^1.0.7", + "esprima": "^4.0.0" + } + }, "jsbn": { "version": "0.1.1", "resolved": "https://registry.npmjs.org/jsbn/-/jsbn-0.1.1.tgz", @@ -3027,36 +2835,36 @@ "dev": true }, "jsdom": { - "version": "11.12.0", - "resolved": "https://registry.npmjs.org/jsdom/-/jsdom-11.12.0.tgz", - "integrity": "sha512-y8Px43oyiBM13Zc1z780FrfNLJCXTL40EWlty/LXUtcjykRBNgLlCjWXpfSPBl2iv+N7koQN+dvqszHZgT/Fjw==", - "dev": true, - "requires": { - "abab": "^2.0.0", - "acorn": "^5.5.3", - "acorn-globals": "^4.1.0", - "array-equal": "^1.0.0", - "cssom": ">= 0.3.2 < 0.4.0", - "cssstyle": "^1.0.0", - "data-urls": "^1.0.0", - "domexception": "^1.0.1", - "escodegen": "^1.9.1", - 
"html-encoding-sniffer": "^1.0.2", - "left-pad": "^1.3.0", - "nwsapi": "^2.0.7", - "parse5": "4.0.0", - "pn": "^1.1.0", - "request": "^2.87.0", - "request-promise-native": "^1.0.5", - "sax": "^1.2.4", - "symbol-tree": "^3.2.2", - "tough-cookie": "^2.3.4", - "w3c-hr-time": "^1.0.1", - "webidl-conversions": "^4.0.2", - "whatwg-encoding": "^1.0.3", - "whatwg-mimetype": "^2.1.0", - "whatwg-url": "^6.4.1", - "ws": "^5.2.0", + "version": "16.4.0", + "resolved": "https://registry.npmjs.org/jsdom/-/jsdom-16.4.0.tgz", + "integrity": "sha512-lYMm3wYdgPhrl7pDcRmvzPhhrGVBeVhPIqeHjzeiHN3DFmD1RBpbExbi8vU7BJdH8VAZYovR8DMt0PNNDM7k8w==", + "dev": true, + "requires": { + "abab": "^2.0.3", + "acorn": "^7.1.1", + "acorn-globals": "^6.0.0", + "cssom": "^0.4.4", + "cssstyle": "^2.2.0", + "data-urls": "^2.0.0", + "decimal.js": "^10.2.0", + "domexception": "^2.0.1", + "escodegen": "^1.14.1", + "html-encoding-sniffer": "^2.0.1", + "is-potential-custom-element-name": "^1.0.0", + "nwsapi": "^2.2.0", + "parse5": "5.1.1", + "request": "^2.88.2", + "request-promise-native": "^1.0.8", + "saxes": "^5.0.0", + "symbol-tree": "^3.2.4", + "tough-cookie": "^3.0.1", + "w3c-hr-time": "^1.0.2", + "w3c-xmlserializer": "^2.0.0", + "webidl-conversions": "^6.1.0", + "whatwg-encoding": "^1.0.5", + "whatwg-mimetype": "^2.3.0", + "whatwg-url": "^8.0.0", + "ws": "^7.2.3", "xml-name-validator": "^3.0.0" } }, @@ -3066,10 +2874,10 @@ "integrity": "sha512-OYu7XEzjkCQ3C5Ps3QIZsQfNpqoJyZZA99wd9aWd05NCtC5pWOkShK2mkL6HXQR6/Cy2lbNdPlZBpuQHXE63gA==", "dev": true }, - "json-parse-better-errors": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/json-parse-better-errors/-/json-parse-better-errors-1.0.2.tgz", - "integrity": "sha512-mrqyZKfX5EhL7hvqcV6WG1yYjnjeuYDzDhhcAAUrq8Po85NBQBJP+ZDUT75qZQ98IkUoBqdkExkukOU7Ts2wrw==", + "json-parse-even-better-errors": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz", + "integrity": 
"sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==", "dev": true }, "json-schema": { @@ -3091,12 +2899,12 @@ "dev": true }, "json5": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/json5/-/json5-2.1.0.tgz", - "integrity": "sha512-8Mh9h6xViijj36g7Dxi+Y4S6hNGV96vcJZr/SrlHh1LR/pEn/8j/+qIBbs44YKl69Lrfctp4QD+AdWLTMqEZAQ==", + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/json5/-/json5-2.1.3.tgz", + "integrity": "sha512-KXPvOm8K9IJKFM0bmdn8QXh7udDh1g/giieX0NLCaMnb4hEiVFqnop2ImTXCc5e0/oHz3LTqmHGtExn5hfMkOA==", "dev": true, "requires": { - "minimist": "^1.2.0" + "minimist": "^1.2.5" } }, "jsprim": { @@ -3112,9 +2920,9 @@ } }, "kind-of": { - "version": "6.0.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz", - "integrity": "sha512-s5kLOcnH0XqDO+FvuaLX8DDjZ18CGFk7VygH40QoKPUQhW4e2rvM0rwUq0t8IQDOwYSeLK01U90OjzBTme2QqA==", + "version": "6.0.3", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz", + "integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw==", "dev": true }, "kleur": { @@ -3123,25 +2931,10 @@ "integrity": "sha512-eTIzlVOSUR+JxdDFepEYcBMtZ9Qqdef+rnzWdRZuMbOywu5tO2w2N7rqjoANZ5k9vywhL6Br1VRjUIgTQx4E8w==", "dev": true }, - "lcid": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/lcid/-/lcid-2.0.0.tgz", - "integrity": "sha512-avPEb8P8EGnwXKClwsNUgryVjllcRqtMYa49NTsbQagYuT1DcXnl1915oxWjoyGrXR6zH/Y0Zc96xWsPcoDKeA==", - "dev": true, - "requires": { - "invert-kv": "^2.0.0" - } - }, - "left-pad": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz", - "integrity": "sha512-XI5MPzVNApjAyhQzphX8BkmKsKUxD4LdyK24iZeQGinBN9yTQT3bFlCBy/aVx2HrNcqQGsdot8ghrjyrvMCoEA==", - "dev": true - }, "leven": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/leven/-/leven-2.1.0.tgz", - "integrity": "sha1-wuep93IJTe6dNCAq6KzORoeHVYA=", + 
"version": "3.1.0", + "resolved": "https://registry.npmjs.org/leven/-/leven-3.1.0.tgz", + "integrity": "sha512-qsda+H8jTaUaN/x5vzW2rzc+8Rw4TAQ/4KjB46IwK5VH+IlVeeeje/EoZRpiXvIqjFgK84QffqPztGI3VBLG1A==", "dev": true }, "levn": { @@ -3154,32 +2947,25 @@ "type-check": "~0.3.2" } }, - "load-json-file": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/load-json-file/-/load-json-file-4.0.0.tgz", - "integrity": "sha1-L19Fq5HjMhYjT9U62rZo607AmTs=", - "dev": true, - "requires": { - "graceful-fs": "^4.1.2", - "parse-json": "^4.0.0", - "pify": "^3.0.0", - "strip-bom": "^3.0.0" - } + "lines-and-columns": { + "version": "1.1.6", + "resolved": "https://registry.npmjs.org/lines-and-columns/-/lines-and-columns-1.1.6.tgz", + "integrity": "sha1-HADHQ7QzzQpOgHWPe2SldEDZ/wA=", + "dev": true }, "locate-path": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", - "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz", + "integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==", "dev": true, "requires": { - "p-locate": "^3.0.0", - "path-exists": "^3.0.0" + "p-locate": "^4.1.0" } }, "lodash": { - "version": "4.17.11", - "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz", - "integrity": "sha512-cQKh8igo5QUhZ7lg38DYWAxMvjSAKG0A8wGSVimP07SIUEK2UO+arSRKbRZWtelMtN5V0Hkwh5ryOto/SshYIg==", + "version": "4.17.20", + "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz", + "integrity": "sha512-PlhdFcillOINfeV7Ni6oF1TAEayyZBoZ8bcshTHqOYJYlrqzRK5hagpagky5o4HfCzzd1TRkXPMFq6cKk9rGmA==", "dev": true }, "lodash.sortby": { @@ -3188,29 +2974,19 @@ "integrity": "sha1-7dFMgk4sycHgsKG0K7UhBRakJDg=", "dev": true }, - "loose-envify": { - "version": "1.4.0", - "resolved": 
"https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz", - "integrity": "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==", - "dev": true, - "requires": { - "js-tokens": "^3.0.0 || ^4.0.0" - } - }, "make-dir": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-2.1.0.tgz", - "integrity": "sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==", + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-3.1.0.tgz", + "integrity": "sha512-g3FeP20LNwhALb/6Cz6Dd4F2ngze0jz7tbzrD2wAV+o9FeNHe4rL+yK2md0J/fiSf1sa1ADhXqi5+oVwOM/eGw==", "dev": true, "requires": { - "pify": "^4.0.1", - "semver": "^5.6.0" + "semver": "^6.0.0" }, "dependencies": { - "pify": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/pify/-/pify-4.0.1.tgz", - "integrity": "sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g==", + "semver": { + "version": "6.3.0", + "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.0.tgz", + "integrity": "sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw==", "dev": true } } @@ -3224,15 +3000,6 @@ "tmpl": "1.0.x" } }, - "map-age-cleaner": { - "version": "0.1.3", - "resolved": "https://registry.npmjs.org/map-age-cleaner/-/map-age-cleaner-0.1.3.tgz", - "integrity": "sha512-bJzx6nMoP6PDLPBFmg7+xRKeFZvFboMrGlxmNj9ClvX53KrmvM5bXFXEWjbz4cz1AFn+jWJ9z/DJSz7hrs0w3w==", - "dev": true, - "requires": { - "p-defer": "^1.0.0" - } - }, "map-cache": { "version": "0.2.2", "resolved": "https://registry.npmjs.org/map-cache/-/map-cache-0.2.2.tgz", @@ -3248,60 +3015,35 @@ "object-visit": "^1.0.0" } }, - "mem": { - "version": "4.3.0", - "resolved": "https://registry.npmjs.org/mem/-/mem-4.3.0.tgz", - "integrity": "sha512-qX2bG48pTqYRVmDB37rn/6PT7LcR8T7oAX3bf99u1Tt1nzxYfxkgqDwUwolPlXweM0XzBOBFzSx4kfp7KP1s/w==", - "dev": true, - 
"requires": { - "map-age-cleaner": "^0.1.1", - "mimic-fn": "^2.0.0", - "p-is-promise": "^2.0.0" - } - }, "merge-stream": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-1.0.1.tgz", - "integrity": "sha1-QEEgLVCKNCugAXQAjfDCUbjBNeE=", - "dev": true, - "requires": { - "readable-stream": "^2.0.1" - } + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz", + "integrity": "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==", + "dev": true }, "micromatch": { - "version": "3.1.10", - "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-3.1.10.tgz", - "integrity": "sha512-MWikgl9n9M3w+bpsY3He8L+w9eF9338xRl8IAO5viDizwSzziFEyUzo2xrrloB64ADbTf8uA8vRqqttDTOmccg==", + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.2.tgz", + "integrity": "sha512-y7FpHSbMUMoyPbYUSzO6PaZ6FyRnQOpHuKwbo1G+Knck95XVU4QAiKdGEnj5wwoS7PlOgthX/09u5iFJ+aYf5Q==", "dev": true, "requires": { - "arr-diff": "^4.0.0", - "array-unique": "^0.3.2", - "braces": "^2.3.1", - "define-property": "^2.0.2", - "extend-shallow": "^3.0.2", - "extglob": "^2.0.4", - "fragment-cache": "^0.2.1", - "kind-of": "^6.0.2", - "nanomatch": "^1.2.9", - "object.pick": "^1.3.0", - "regex-not": "^1.0.0", - "snapdragon": "^0.8.1", - "to-regex": "^3.0.2" + "braces": "^3.0.1", + "picomatch": "^2.0.5" } }, "mime-db": { - "version": "1.40.0", - "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.40.0.tgz", - "integrity": "sha512-jYdeOMPy9vnxEqFRRo6ZvTZ8d9oPb+k18PKoYNYUe2stVEBPPwsln/qWzdbmaIvnhZ9v2P+CuecK+fpUfsV2mA==", + "version": "1.44.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.44.0.tgz", + "integrity": "sha512-/NOTfLrsPBVeH7YtFPgsVWveuL+4SjjYxaQ1xtM1KMFj7HdxlBlxeyNLzhyJVx7r4rZGJAZ/6lkKCitSc/Nmpg==", "dev": true }, "mime-types": { - "version": "2.1.24", - "resolved": 
"https://registry.npmjs.org/mime-types/-/mime-types-2.1.24.tgz", - "integrity": "sha512-WaFHS3MCl5fapm3oLxU4eYDw77IQM2ACcxQ9RIxfaC3ooc6PFuBMGZZsYpvoXS5D5QTWPieo1jjLdAm3TBP3cQ==", + "version": "2.1.27", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.27.tgz", + "integrity": "sha512-JIhqnCasI9yD+SsmkquHBxTSEuZdQX5BuQnS2Vc7puQQQ+8yiP5AY5uWhpdv4YL4VM5c6iliiYWPgJ/nJQLp7w==", "dev": true, "requires": { - "mime-db": "1.40.0" + "mime-db": "1.44.0" } }, "mimic-fn": { @@ -3320,9 +3062,9 @@ } }, "minimist": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz", - "integrity": "sha1-o1AIsg9BOD7sH7kU9M1d95omQoQ=", + "version": "1.2.5", + "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz", + "integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw==", "dev": true }, "mixin-deep": { @@ -3346,36 +3088,12 @@ } } }, - "mkdirp": { - "version": "0.5.1", - "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-0.5.1.tgz", - "integrity": "sha1-MAV0OOrGz3+MR2fzhkjWaX11yQM=", - "dev": true, - "requires": { - "minimist": "0.0.8" - }, - "dependencies": { - "minimist": { - "version": "0.0.8", - "resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz", - "integrity": "sha1-hX/Kv8M5fSYluCKCYuhqp6ARsF0=", - "dev": true - } - } - }, "ms": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", - "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", + "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==", "dev": true }, - "nan": { - "version": "2.14.0", - "resolved": "https://registry.npmjs.org/nan/-/nan-2.14.0.tgz", - "integrity": "sha512-INOFj37C7k3AfaNTtX8RhsTw7qRy7eLET14cROi9+5HAVbbHuIWUHEauBv5qT4Av2tWasiTY1Jw6puUNqRJXQg==", - "dev": true, - "optional": true - }, "nanomatch": { 
"version": "1.2.13", "resolved": "https://registry.npmjs.org/nanomatch/-/nanomatch-1.2.13.tgz", @@ -3401,12 +3119,6 @@ "integrity": "sha1-Sr6/7tdUHywnrPspvbvRXI1bpPc=", "dev": true }, - "neo-async": { - "version": "2.6.1", - "resolved": "https://registry.npmjs.org/neo-async/-/neo-async-2.6.1.tgz", - "integrity": "sha512-iyam8fBuCUpWeKPGpaNMetEocMt364qkCsfL9JuhjXX6dRnguRVOfk2GZaDpPjcOKiiXCPINZC1GczQ7iTq3Zw==", - "dev": true - }, "nice-try": { "version": "1.0.5", "resolved": "https://registry.npmjs.org/nice-try/-/nice-try-1.0.5.tgz", @@ -3426,16 +3138,37 @@ "dev": true }, "node-notifier": { - "version": "5.4.0", - "resolved": "https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.0.tgz", - "integrity": "sha512-SUDEb+o71XR5lXSTyivXd9J7fCloE3SyP4lSgt3lU2oSANiox+SxlNRGPjDKrwU1YN3ix2KN/VGGCg0t01rttQ==", + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/node-notifier/-/node-notifier-8.0.0.tgz", + "integrity": "sha512-46z7DUmcjoYdaWyXouuFNNfUo6eFa94t23c53c+lG/9Cvauk4a98rAUp9672X5dxGdQmLpPzTxzu8f/OeEPaFA==", "dev": true, + "optional": true, "requires": { "growly": "^1.3.0", - "is-wsl": "^1.1.0", - "semver": "^5.5.0", + "is-wsl": "^2.2.0", + "semver": "^7.3.2", "shellwords": "^0.1.1", - "which": "^1.3.0" + "uuid": "^8.3.0", + "which": "^2.0.2" + }, + "dependencies": { + "semver": { + "version": "7.3.2", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.2.tgz", + "integrity": "sha512-OrOb32TeeambH6UrhtShmF7CRDqhL6/5XpPNp2DuRH6+9QLw/orhp72j87v8Qa1ScDkvrrBNpZcDejAirJmfXQ==", + "dev": true, + "optional": true + }, + "which": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz", + "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", + "dev": true, + "optional": true, + "requires": { + "isexe": "^2.0.0" + } + } } }, "normalize-package-data": { @@ -3451,13 +3184,10 @@ } }, "normalize-path": { - "version": "2.1.1", - "resolved": 
"https://registry.npmjs.org/normalize-path/-/normalize-path-2.1.1.tgz", - "integrity": "sha1-GrKLVW4Zg2Oowab35vogE3/mrtk=", - "dev": true, - "requires": { - "remove-trailing-separator": "^1.0.1" - } + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz", + "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==", + "dev": true }, "npm-run-path": { "version": "2.0.2", @@ -3468,16 +3198,10 @@ "path-key": "^2.0.0" } }, - "number-is-nan": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/number-is-nan/-/number-is-nan-1.0.1.tgz", - "integrity": "sha1-CXtgK1NCKlIsGvuHkDGDNpQaAR0=", - "dev": true - }, "nwsapi": { - "version": "2.1.4", - "resolved": "https://registry.npmjs.org/nwsapi/-/nwsapi-2.1.4.tgz", - "integrity": "sha512-iGfd9Y6SFdTNldEy2L0GUhcarIutFmk+MPWIn9dmj8NMIup03G08uUF2KGbbmv/Ux4RT0VZJoP/sVbWA6d/VIw==", + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/nwsapi/-/nwsapi-2.2.0.tgz", + "integrity": "sha512-h2AatdwYH+JHiZpv7pt/gSX1XoRGb7L/qSIeuqA6GwYoF9w1vP1cw42TO0aI2pNyshRK5893hNSl+1//vHK7hQ==", "dev": true }, "oauth-sign": { @@ -3517,12 +3241,6 @@ } } }, - "object-keys": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz", - "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA==", - "dev": true - }, "object-visit": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/object-visit/-/object-visit-1.0.1.tgz", @@ -3532,16 +3250,6 @@ "isobject": "^3.0.0" } }, - "object.getownpropertydescriptors": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/object.getownpropertydescriptors/-/object.getownpropertydescriptors-2.0.3.tgz", - "integrity": "sha1-h1jIRvW0B62rDyNuCYbxSwUcqhY=", - "dev": true, - "requires": { - "define-properties": "^1.1.2", - "es-abstract": "^1.5.1" - } - }, "object.pick": { "version": 
"1.3.0", "resolved": "https://registry.npmjs.org/object.pick/-/object.pick-1.3.0.tgz", @@ -3560,71 +3268,34 @@ "wrappy": "1" } }, - "optimist": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/optimist/-/optimist-0.6.1.tgz", - "integrity": "sha1-2j6nRob6IaGaERwybpDrFaAZZoY=", + "onetime": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.2.tgz", + "integrity": "sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg==", "dev": true, "requires": { - "minimist": "~0.0.1", - "wordwrap": "~0.0.2" - }, - "dependencies": { - "minimist": { - "version": "0.0.10", - "resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz", - "integrity": "sha1-3j+YVD2/lggr5IrRoMfNqDYwHc8=", - "dev": true - } + "mimic-fn": "^2.1.0" } }, "optionator": { - "version": "0.8.2", - "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.8.2.tgz", - "integrity": "sha1-NkxeQJ0/TWMB1sC0wFu6UBgK62Q=", + "version": "0.8.3", + "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.8.3.tgz", + "integrity": "sha512-+IW9pACdk3XWmmTXG8m3upGUJst5XRGzxMRjXzAuJ1XnIFNvfhjjIuYkDvysnPQ7qzqVzLt78BCruntqRhWQbA==", "dev": true, "requires": { "deep-is": "~0.1.3", - "fast-levenshtein": "~2.0.4", + "fast-levenshtein": "~2.0.6", "levn": "~0.3.0", "prelude-ls": "~1.1.2", "type-check": "~0.3.2", - "wordwrap": "~1.0.0" - }, - "dependencies": { - "wordwrap": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/wordwrap/-/wordwrap-1.0.0.tgz", - "integrity": "sha1-J1hIEIkUVqQXHI0CJkQa3pDLyus=", - "dev": true - } - } - }, - "os-locale": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/os-locale/-/os-locale-3.1.0.tgz", - "integrity": "sha512-Z8l3R4wYWM40/52Z+S265okfFj8Kt2cC2MKY+xNi3kFs+XGI7WXu/I309QQQYbRW4ijiZ+yxs9pqEhJh0DqW3Q==", - "dev": true, - "requires": { - "execa": "^1.0.0", - "lcid": "^2.0.0", - "mem": "^4.0.0" + "word-wrap": "~1.2.3" } }, - "p-defer": { - 
"version": "1.0.0", - "resolved": "https://registry.npmjs.org/p-defer/-/p-defer-1.0.0.tgz", - "integrity": "sha1-n26xgvbJqozXQwBKfU+WsZaw+ww=", - "dev": true - }, "p-each-series": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/p-each-series/-/p-each-series-1.0.0.tgz", - "integrity": "sha1-kw89Et0fUOdDRFeiLNbwSsatf3E=", - "dev": true, - "requires": { - "p-reduce": "^1.0.0" - } + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/p-each-series/-/p-each-series-2.1.0.tgz", + "integrity": "sha512-ZuRs1miPT4HrjFa+9fRfOFXxGJfORgelKV9f9nNOWw2gl6gVsRaVDOQP0+MI0G0wGKns1Yacsu0GjOFbTK0JFQ==", + "dev": true }, "p-finally": { "version": "1.0.0", @@ -3632,36 +3303,24 @@ "integrity": "sha1-P7z7FbiZpEEjs0ttzBi3JDNqLK4=", "dev": true }, - "p-is-promise": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/p-is-promise/-/p-is-promise-2.1.0.tgz", - "integrity": "sha512-Y3W0wlRPK8ZMRbNq97l4M5otioeA5lm1z7bkNkxCka8HSPjR0xRWmpCmc9utiaLP9Jb1eD8BgeIxTW4AIF45Pg==", - "dev": true - }, "p-limit": { - "version": "2.2.0", - "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.2.0.tgz", - "integrity": "sha512-pZbTJpoUsCzV48Mc9Nh51VbwO0X9cuPFE8gYwx9BTCt9SF8/b7Zljd2fVgOxhIF/HDTKgpVzs+GPhyKfjLLFRQ==", + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz", + "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==", "dev": true, "requires": { "p-try": "^2.0.0" } }, "p-locate": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", - "integrity": "sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz", + "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==", "dev": true, "requires": { - "p-limit": "^2.0.0" + "p-limit": "^2.2.0" 
} }, - "p-reduce": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/p-reduce/-/p-reduce-1.0.0.tgz", - "integrity": "sha1-GMKw3ZNqRpClKfgjH1ig/bakffo=", - "dev": true - }, "p-try": { "version": "2.2.0", "resolved": "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz", @@ -3669,19 +3328,21 @@ "dev": true }, "parse-json": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-4.0.0.tgz", - "integrity": "sha1-vjX1Qlvh9/bHRxhPmKeIy5lHfuA=", + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-5.1.0.tgz", + "integrity": "sha512-+mi/lmVVNKFNVyLXV31ERiy2CY5E1/F6QtJFEzoChPRwwngMNXRDQ9GJ5WdE2Z2P4AujsOi0/+2qHID68KwfIQ==", "dev": true, "requires": { + "@babel/code-frame": "^7.0.0", "error-ex": "^1.3.1", - "json-parse-better-errors": "^1.0.1" + "json-parse-even-better-errors": "^2.3.0", + "lines-and-columns": "^1.1.6" } }, "parse5": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/parse5/-/parse5-4.0.0.tgz", - "integrity": "sha512-VrZ7eOd3T1Fk4XWNXMgiGBK/z0MG48BWG2uQNU4I72fkQuKUTZpl+u9k+CxEG0twMVzSmXEEz12z5Fnw1jIQFA==", + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/parse5/-/parse5-5.1.1.tgz", + "integrity": "sha512-ugq4DFI0Ptb+WWjAdOK16+u/nHfiIrcE+sh8kZMaM0WllQKLI9rOUq6c2b7cwPkXdzfQESqvoqK6ug7U/Yyzug==", "dev": true }, "pascalcase": { @@ -3691,9 +3352,9 @@ "dev": true }, "path-exists": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", - "integrity": "sha1-zg6+ql94yxiSXqfYENe1mwEP1RU=", + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", + "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w==", "dev": true }, "path-is-absolute": { @@ -3714,25 +3375,16 @@ "integrity": "sha512-GSmOT2EbHrINBf9SR7CDELwlJ8AENk3Qn7OikK4nFYAu3Ote2+JYNVvkpAEQm3/TLNEJFD/xZJjzyxg3KBWOzw==", "dev": true }, - "path-type": { - 
"version": "3.0.0", - "resolved": "https://registry.npmjs.org/path-type/-/path-type-3.0.0.tgz", - "integrity": "sha512-T2ZUsdZFHgA3u4e5PfPbjd7HDDpxPnQb5jN0SrDsjNSuVXHJqtwTnWqG0B1jZrgmJ/7lj1EmVIByWt1gxGkWvg==", - "dev": true, - "requires": { - "pify": "^3.0.0" - } - }, "performance-now": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/performance-now/-/performance-now-2.1.0.tgz", "integrity": "sha1-Ywn04OX6kT7BxpMHrjZLSzd8nns=", "dev": true }, - "pify": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz", - "integrity": "sha1-5aSs0sEB/fPZpNB/DbxNtJ3SgXY=", + "picomatch": { + "version": "2.2.2", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.2.2.tgz", + "integrity": "sha512-q0M/9eZHzmr0AulXyPwNfZjtwZ/RBZlbN3K3CErVrk50T2ASYI7Bye0EvekFY3IP1Nt2DHu0re+V2ZHIpMkuWg==", "dev": true }, "pirates": { @@ -3745,20 +3397,14 @@ } }, "pkg-dir": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-3.0.0.tgz", - "integrity": "sha512-/E57AYkoeQ25qkxMj5PBOVgF8Kiu/h7cYS30Z5+R7WaiCCBfLq58ZI/dSeaEKb9WVJV5n/03QwrN3IeWIFllvw==", + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz", + "integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==", "dev": true, "requires": { - "find-up": "^3.0.0" + "find-up": "^4.0.0" } }, - "pn": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/pn/-/pn-1.1.0.tgz", - "integrity": "sha512-2qHaIQr2VLRFoxe2nASzsV6ef4yOOH+Fi9FBOVH6cqeSgUnoyySPZkxzLuzd+RYOQTRpROA0ztTMqxROKSb/nA==", - "dev": true - }, "posix-character-classes": { "version": "0.1.1", "resolved": "https://registry.npmjs.org/posix-character-classes/-/posix-character-classes-0.1.1.tgz", @@ -3772,37 +3418,31 @@ "dev": true }, "pretty-format": { - "version": "24.8.0", - "resolved": "https://registry.npmjs.org/pretty-format/-/pretty-format-24.8.0.tgz", - "integrity": 
"sha512-P952T7dkrDEplsR+TuY7q3VXDae5Sr7zmQb12JU/NDQa/3CH7/QW0yvqLcGN6jL+zQFKaoJcPc+yJxMTGmosqw==", + "version": "26.6.0", + "resolved": "https://registry.npmjs.org/pretty-format/-/pretty-format-26.6.0.tgz", + "integrity": "sha512-Uumr9URVB7bm6SbaByXtx+zGlS+0loDkFMHP0kHahMjmfCtmFY03iqd++5v3Ld6iB5TocVXlBN/T+DXMn9d4BA==", "dev": true, "requires": { - "@jest/types": "^24.8.0", - "ansi-regex": "^4.0.0", - "ansi-styles": "^3.2.0", - "react-is": "^16.8.4" + "@jest/types": "^26.6.0", + "ansi-regex": "^5.0.0", + "ansi-styles": "^4.0.0", + "react-is": "^16.12.0" } }, - "process-nextick-args": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", - "integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==", - "dev": true - }, "prompts": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.1.0.tgz", - "integrity": "sha512-+x5TozgqYdOwWsQFZizE/Tra3fKvAoy037kOyU6cgz84n8f6zxngLOV4O32kTwt9FcLCxAqw0P/c8rOr9y+Gfg==", + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.3.2.tgz", + "integrity": "sha512-Q06uKs2CkNYVID0VqwfAl9mipo99zkBv/n2JtWY89Yxa3ZabWSrs0e2KTudKVa3peLUvYXMefDqIleLPVUBZMA==", "dev": true, "requires": { - "kleur": "^3.0.2", - "sisteransi": "^1.0.0" + "kleur": "^3.0.3", + "sisteransi": "^1.0.4" } }, "psl": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/psl/-/psl-1.2.0.tgz", - "integrity": "sha512-GEn74ZffufCmkDDLNcl3uuyF/aSD6exEyh1v/ZSdAomB82t6G9hzJVRx0jBmLDW+VfZqks3aScmMw9DszwUalA==", + "version": "1.8.0", + "resolved": "https://registry.npmjs.org/psl/-/psl-1.8.0.tgz", + "integrity": "sha512-RIdOzyoavK+hA18OGGWDqUTsCLhtA7IcZ/6NCs4fFJaHBDab+pDDmDIByWFRQJq2Cd7r1OoQxBGKOaztq+hjIQ==", "dev": true }, "pump": { @@ -3823,59 +3463,45 @@ }, "qs": { "version": "6.5.2", - "resolved": "https://registry.npmjs.org/qs/-/qs-6.5.2.tgz", - "integrity": 
"sha512-N5ZAX4/LxJmF+7wN74pUD6qAh9/wnvdQcjq9TZjevvXzSUo7bfmw91saqMjzGS2xq91/odN2dW/WOl7qQHNDGA==", - "dev": true - }, - "react-is": { - "version": "16.8.6", - "resolved": "https://registry.npmjs.org/react-is/-/react-is-16.8.6.tgz", - "integrity": "sha512-aUk3bHfZ2bRSVFFbbeVS4i+lNPZr3/WM5jT2J5omUVV1zzcs1nAaf3l51ctA5FFvCRbhrH0bdAsRRQddFJZPtA==", - "dev": true - }, - "read-pkg": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/read-pkg/-/read-pkg-3.0.0.tgz", - "integrity": "sha1-nLxoaXj+5l0WwA4rGcI3/Pbjg4k=", - "dev": true, - "requires": { - "load-json-file": "^4.0.0", - "normalize-package-data": "^2.3.2", - "path-type": "^3.0.0" - } - }, - "read-pkg-up": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/read-pkg-up/-/read-pkg-up-4.0.0.tgz", - "integrity": "sha512-6etQSH7nJGsK0RbG/2TeDzZFa8shjQ1um+SwQQ5cwKy0dhSXdOncEhb1CPpvQG4h7FyOV6EB6YlV0yJvZQNAkA==", - "dev": true, - "requires": { - "find-up": "^3.0.0", - "read-pkg": "^3.0.0" - } + "resolved": "https://registry.npmjs.org/qs/-/qs-6.5.2.tgz", + "integrity": "sha512-N5ZAX4/LxJmF+7wN74pUD6qAh9/wnvdQcjq9TZjevvXzSUo7bfmw91saqMjzGS2xq91/odN2dW/WOl7qQHNDGA==", + "dev": true + }, + "react-is": { + "version": "16.13.1", + "resolved": "https://registry.npmjs.org/react-is/-/react-is-16.13.1.tgz", + "integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ==", + "dev": true }, - "readable-stream": { - "version": "2.3.6", - "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.6.tgz", - "integrity": "sha512-tQtKA9WIAhBF3+VLAseyMqZeBjW0AHJoxOtYqSUZNJxauErmLbVm2FW1y+J/YA9dUrAC39ITejlZWhVIwawkKw==", + "read-pkg": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/read-pkg/-/read-pkg-5.2.0.tgz", + "integrity": "sha512-Ug69mNOpfvKDAc2Q8DRpMjjzdtrnv9HcSMX+4VsZxD1aZ6ZzrIE7rlzXBtWTyhULSMKg076AW6WR5iZpD0JiOg==", "dev": true, "requires": { - "core-util-is": "~1.0.0", - "inherits": "~2.0.3", - "isarray": "~1.0.0", - 
"process-nextick-args": "~2.0.0", - "safe-buffer": "~5.1.1", - "string_decoder": "~1.1.1", - "util-deprecate": "~1.0.1" + "@types/normalize-package-data": "^2.4.0", + "normalize-package-data": "^2.5.0", + "parse-json": "^5.0.0", + "type-fest": "^0.6.0" + }, + "dependencies": { + "type-fest": { + "version": "0.6.0", + "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.6.0.tgz", + "integrity": "sha512-q+MB8nYR1KDLrgr4G5yemftpMC7/QLqVndBmEEdqzmNj5dcFOO4Oo8qlwZE3ULT3+Zim1F8Kq4cBnikNhlCMlg==", + "dev": true + } } }, - "realpath-native": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/realpath-native/-/realpath-native-1.1.0.tgz", - "integrity": "sha512-wlgPA6cCIIg9gKz0fgAPjnzh4yR/LnXovwuo9hvyGvx3h8nX4+/iLZplfUWasXpqD8BdnGnP5njOFjkUwPzvjA==", + "read-pkg-up": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/read-pkg-up/-/read-pkg-up-7.0.1.tgz", + "integrity": "sha512-zK0TB7Xd6JpCLmlLmufqykGE+/TlOePD6qKClNW7hHDKFh/J7/7gCWGR7joEQEW1bKq3a3yUZSObOoWLFQ4ohg==", "dev": true, "requires": { - "util.promisify": "^1.0.0" + "find-up": "^4.1.0", + "read-pkg": "^5.2.0", + "type-fest": "^0.8.1" } }, "regex-not": { @@ -3907,9 +3533,9 @@ "dev": true }, "request": { - "version": "2.88.0", - "resolved": "https://registry.npmjs.org/request/-/request-2.88.0.tgz", - "integrity": "sha512-NAqBSrijGLZdM0WZNsInLJpkJokL72XYjUpnB0iwsRgxh7dB6COrHnTBNwN0E+lHDAJzu7kLAkDeY08z2/A0hg==", + "version": "2.88.2", + "resolved": "https://registry.npmjs.org/request/-/request-2.88.2.tgz", + "integrity": "sha512-MsvtOrfG9ZcrOwAW+Qi+F6HbD0CWXEh9ou77uOb7FM2WPhwT7smM833PzanhJLsgXjN89Ir6V2PczXNnMpwKhw==", "dev": true, "requires": { "aws-sign2": "~0.7.0", @@ -3919,7 +3545,7 @@ "extend": "~3.0.2", "forever-agent": "~0.6.1", "form-data": "~2.3.2", - "har-validator": "~5.1.0", + "har-validator": "~5.1.3", "http-signature": "~1.2.0", "is-typedarray": "~1.0.0", "isstream": "~0.1.2", @@ -3929,47 +3555,59 @@ "performance-now": "^2.1.0", "qs": "~6.5.2", "safe-buffer": "^5.1.2", 
- "tough-cookie": "~2.4.3", + "tough-cookie": "~2.5.0", "tunnel-agent": "^0.6.0", "uuid": "^3.3.2" }, "dependencies": { - "punycode": { - "version": "1.4.1", - "resolved": "https://registry.npmjs.org/punycode/-/punycode-1.4.1.tgz", - "integrity": "sha1-wNWmOycYgArY4esPpSachN1BhF4=", - "dev": true - }, "tough-cookie": { - "version": "2.4.3", - "resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.4.3.tgz", - "integrity": "sha512-Q5srk/4vDM54WJsJio3XNn6K2sCG+CQ8G5Wz6bZhRZoAe/+TxjWB/GlFAnYEbkYVlON9FMk/fE3h2RLpPXo4lQ==", + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.5.0.tgz", + "integrity": "sha512-nlLsUzgm1kfLXSXfRZMc1KLAugd4hqJHDTvc2hDIwS3mZAfMEuMbc03SujMF+GEcpaX/qboeycw6iO8JwVv2+g==", "dev": true, "requires": { - "psl": "^1.1.24", - "punycode": "^1.4.1" + "psl": "^1.1.28", + "punycode": "^2.1.1" } + }, + "uuid": { + "version": "3.4.0", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-3.4.0.tgz", + "integrity": "sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A==", + "dev": true } } }, "request-promise-core": { - "version": "1.1.2", - "resolved": "https://registry.npmjs.org/request-promise-core/-/request-promise-core-1.1.2.tgz", - "integrity": "sha512-UHYyq1MO8GsefGEt7EprS8UrXsm1TxEvFUX1IMTuSLU2Rh7fTIdFtl8xD7JiEYiWU2dl+NYAjCTksTehQUxPag==", + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/request-promise-core/-/request-promise-core-1.1.4.tgz", + "integrity": "sha512-TTbAfBBRdWD7aNNOoVOBH4pN/KigV6LyapYNNlAPA8JwbovRti1E88m3sYAwsLi5ryhPKsE9APwnjFTgdUjTpw==", "dev": true, "requires": { - "lodash": "^4.17.11" + "lodash": "^4.17.19" } }, "request-promise-native": { - "version": "1.0.7", - "resolved": "https://registry.npmjs.org/request-promise-native/-/request-promise-native-1.0.7.tgz", - "integrity": "sha512-rIMnbBdgNViL37nZ1b3L/VfPOpSi0TqVDQPAvO6U14lMzOLrt5nilxCQqtDKhZeDiW0/hkCXGoQjhgJd/tCh6w==", + "version": "1.0.9", + "resolved": 
"https://registry.npmjs.org/request-promise-native/-/request-promise-native-1.0.9.tgz", + "integrity": "sha512-wcW+sIUiWnKgNY0dqCpOZkUbF/I+YPi+f09JZIDa39Ec+q82CpSYniDp+ISgTTbKmnpJWASeJBPZmoxH84wt3g==", "dev": true, "requires": { - "request-promise-core": "1.1.2", + "request-promise-core": "1.1.4", "stealthy-require": "^1.1.1", "tough-cookie": "^2.3.3" + }, + "dependencies": { + "tough-cookie": { + "version": "2.5.0", + "resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.5.0.tgz", + "integrity": "sha512-nlLsUzgm1kfLXSXfRZMc1KLAugd4hqJHDTvc2hDIwS3mZAfMEuMbc03SujMF+GEcpaX/qboeycw6iO8JwVv2+g==", + "dev": true, + "requires": { + "psl": "^1.1.28", + "punycode": "^2.1.1" + } + } } }, "require-directory": { @@ -3985,27 +3623,28 @@ "dev": true }, "resolve": { - "version": "1.11.1", - "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.11.1.tgz", - "integrity": "sha512-vIpgF6wfuJOZI7KKKSP+HmiKggadPQAdsp5HiC1mvqnfp0gF1vdwgBWZIdrVft9pgqoMFQN+R7BSWZiBxx+BBw==", + "version": "1.18.1", + "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.18.1.tgz", + "integrity": "sha512-lDfCPaMKfOJXjy0dPayzPdF1phampNWr3qFCjAu+rw/qbQmr5jWH5xN2hwh9QKfw9E5v4hwV7A+jrCmL8yjjqA==", "dev": true, "requires": { + "is-core-module": "^2.0.0", "path-parse": "^1.0.6" } }, "resolve-cwd": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-2.0.0.tgz", - "integrity": "sha1-AKn3OHVW4nA46uIyyqNypqWbZlo=", + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-3.0.0.tgz", + "integrity": "sha512-OrZaX2Mb+rJCpH/6CpSqt9xFVpN++x01XnN2ie9g6P5/3xelLAkXWVADpdz1IHD/KFfEXyE6V0U01OQ3UO2rEg==", "dev": true, "requires": { - "resolve-from": "^3.0.0" + "resolve-from": "^5.0.0" } }, "resolve-from": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-3.0.0.tgz", - "integrity": "sha1-six699nWiBvItuZTM17rywoYh0g=", + "version": "5.0.0", + "resolved": 
"https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz", + "integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw==", "dev": true }, "resolve-url": { @@ -4021,9 +3660,9 @@ "dev": true }, "rimraf": { - "version": "2.6.3", - "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-2.6.3.tgz", - "integrity": "sha512-mwqeW5XsA2qAejG46gYdENaxXjx9onRNCfn7L0duuP4hCuTIi/QO7PDK07KJfp1d+izWPrzEJDcSqBa0OZQriA==", + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz", + "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==", "dev": true, "requires": { "glob": "^7.1.3" @@ -4071,18 +3710,145 @@ "micromatch": "^3.1.4", "minimist": "^1.1.1", "walker": "~1.0.5" + }, + "dependencies": { + "anymatch": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-2.0.0.tgz", + "integrity": "sha512-5teOsQWABXHHBFP9y3skS5P3d/WfWXpv3FUpy+LorMrNYaT9pI4oLMQX7jzQ2KklNpGpWHzdCXTDT2Y3XGlZBw==", + "dev": true, + "requires": { + "micromatch": "^3.1.4", + "normalize-path": "^2.1.1" + } + }, + "braces": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/braces/-/braces-2.3.2.tgz", + "integrity": "sha512-aNdbnj9P8PjdXU4ybaWLK2IF3jc/EoDYbC7AazW6to3TRsfXxscC9UXOB5iDiEQrkyIbWp2SLQda4+QAa7nc3w==", + "dev": true, + "requires": { + "arr-flatten": "^1.1.0", + "array-unique": "^0.3.2", + "extend-shallow": "^2.0.1", + "fill-range": "^4.0.0", + "isobject": "^3.0.1", + "repeat-element": "^1.1.2", + "snapdragon": "^0.8.1", + "snapdragon-node": "^2.0.1", + "split-string": "^3.0.2", + "to-regex": "^3.0.1" + }, + "dependencies": { + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + } + } + }, + "fill-range": { + "version": 
"4.0.0", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-4.0.0.tgz", + "integrity": "sha1-1USBHUKPmOsGpj3EAtJAPDKMOPc=", + "dev": true, + "requires": { + "extend-shallow": "^2.0.1", + "is-number": "^3.0.0", + "repeat-string": "^1.6.1", + "to-regex-range": "^2.1.0" + }, + "dependencies": { + "extend-shallow": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", + "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", + "dev": true, + "requires": { + "is-extendable": "^0.1.0" + } + } + } + }, + "is-number": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-3.0.0.tgz", + "integrity": "sha1-JP1iAaR4LPUFYcgQJ2r8fRLXEZU=", + "dev": true, + "requires": { + "kind-of": "^3.0.2" + }, + "dependencies": { + "kind-of": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", + "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", + "dev": true, + "requires": { + "is-buffer": "^1.1.5" + } + } + } + }, + "micromatch": { + "version": "3.1.10", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-3.1.10.tgz", + "integrity": "sha512-MWikgl9n9M3w+bpsY3He8L+w9eF9338xRl8IAO5viDizwSzziFEyUzo2xrrloB64ADbTf8uA8vRqqttDTOmccg==", + "dev": true, + "requires": { + "arr-diff": "^4.0.0", + "array-unique": "^0.3.2", + "braces": "^2.3.1", + "define-property": "^2.0.2", + "extend-shallow": "^3.0.2", + "extglob": "^2.0.4", + "fragment-cache": "^0.2.1", + "kind-of": "^6.0.2", + "nanomatch": "^1.2.9", + "object.pick": "^1.3.0", + "regex-not": "^1.0.0", + "snapdragon": "^0.8.1", + "to-regex": "^3.0.2" + } + }, + "normalize-path": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-2.1.1.tgz", + "integrity": "sha1-GrKLVW4Zg2Oowab35vogE3/mrtk=", + "dev": true, + "requires": { + "remove-trailing-separator": "^1.0.1" + } + }, + "to-regex-range": { + "version": "2.1.1", + "resolved": 
"https://registry.npmjs.org/to-regex-range/-/to-regex-range-2.1.1.tgz", + "integrity": "sha1-fIDBe53+vlmeJzZ+DU3VWQFB2zg=", + "dev": true, + "requires": { + "is-number": "^3.0.0", + "repeat-string": "^1.6.1" + } + } } }, - "sax": { - "version": "1.2.4", - "resolved": "https://registry.npmjs.org/sax/-/sax-1.2.4.tgz", - "integrity": "sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw==", - "dev": true + "saxes": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/saxes/-/saxes-5.0.1.tgz", + "integrity": "sha512-5LBh1Tls8c9xgGjw3QrMwETmTMVk0oFgvrFSvWx62llR2hcEInrKNZ2GZCCuuy2lvWrdl5jhbpeqc5hRYKFOcw==", + "dev": true, + "requires": { + "xmlchars": "^2.2.0" + } }, "semver": { - "version": "5.7.0", - "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.0.tgz", - "integrity": "sha512-Ya52jSX2u7QKghxeoFGpLwCtGlt7j0oY9DYb5apt9nPlJ42ID+ulTXESnt/qAQcoSERyZ5sl3LDIOw0nAn/5DA==", + "version": "5.7.1", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz", + "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==", "dev": true }, "set-blocking": { @@ -4133,24 +3899,25 @@ "version": "0.1.1", "resolved": "https://registry.npmjs.org/shellwords/-/shellwords-0.1.1.tgz", "integrity": "sha512-vFwSUfQvqybiICwZY5+DAWIPLKsWO31Q91JSKl3UYv+K5c2QRPzn0qzec6QPu1Qc9eHYItiP3NdJqNVqetYAww==", - "dev": true + "dev": true, + "optional": true }, "signal-exit": { - "version": "3.0.2", - "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.2.tgz", - "integrity": "sha1-tf3AjxKH6hF4Yo5BXiUTK3NkbG0=", + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.3.tgz", + "integrity": "sha512-VUJ49FC8U1OxwZLxIbTTrDvLnf/6TDgxZcK8wxR8zs13xpx7xbG60ndBlhNrFi2EMuFRoeDoJO7wthSLq42EjA==", "dev": true }, "sisteransi": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.2.tgz", - "integrity": 
"sha512-ZcYcZcT69nSLAR2oLN2JwNmLkJEKGooFMCdvOkFrToUt/WfcRWqhIg4P4KwY4dmLbuyXIx4o4YmPsvMRJYJd/w==", + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz", + "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg==", "dev": true }, "slash": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/slash/-/slash-2.0.0.tgz", - "integrity": "sha512-ZYKh3Wh2z1PpEXWr0MpSBZ0V6mZHAQfYevttO11c51CaWjGTaadiKZ+wVt1PbMlDV5qhMFslpZCemhwOK7C89A==", + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/slash/-/slash-3.0.0.tgz", + "integrity": "sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q==", "dev": true }, "snapdragon": { @@ -4169,6 +3936,15 @@ "use": "^3.1.0" }, "dependencies": { + "debug": { + "version": "2.6.9", + "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", + "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, "define-property": { "version": "0.2.5", "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", @@ -4187,6 +3963,12 @@ "is-extendable": "^0.1.0" } }, + "ms": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", + "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", + "dev": true + }, "source-map": { "version": "0.5.7", "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.5.7.tgz", @@ -4273,12 +4055,12 @@ "dev": true }, "source-map-resolve": { - "version": "0.5.2", - "resolved": "https://registry.npmjs.org/source-map-resolve/-/source-map-resolve-0.5.2.tgz", - "integrity": "sha512-MjqsvNwyz1s0k81Goz/9vRBe9SZdB09Bdw+/zYyO+3CuPk6fouTaxscHkgtE8jKvf01kVfl8riHzERQ/kefaSA==", + "version": "0.5.3", + "resolved": "https://registry.npmjs.org/source-map-resolve/-/source-map-resolve-0.5.3.tgz", + "integrity": 
"sha512-Htz+RnsXWk5+P2slx5Jh3Q66vhQj1Cllm0zvnaY98+NFx+Dv2CF/f5O/t8x+KaNdrdIAsruNzoh/KpialbqAnw==", "dev": true, "requires": { - "atob": "^2.1.1", + "atob": "^2.1.2", "decode-uri-component": "^0.2.0", "resolve-url": "^0.2.1", "source-map-url": "^0.4.0", @@ -4286,9 +4068,9 @@ } }, "source-map-support": { - "version": "0.5.12", - "resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.12.tgz", - "integrity": "sha512-4h2Pbvyy15EE02G+JOZpUCmqWJuqrs+sEkzewTm++BPi7Hvn/HwcqLAcNxYAyI0x13CpPPn+kMjl+hplXMHITQ==", + "version": "0.5.19", + "resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.19.tgz", + "integrity": "sha512-Wonm7zOCIJzBGQdB+thsPar0kYuCIzYvxZwlBa87yi/Mdjv7Tip2cyVbLj5o0cFPN4EVkuTwb3GDDyUx2DGnGw==", "dev": true, "requires": { "buffer-from": "^1.0.0", @@ -4302,9 +4084,9 @@ "dev": true }, "spdx-correct": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/spdx-correct/-/spdx-correct-3.1.0.tgz", - "integrity": "sha512-lr2EZCctC2BNR7j7WzJ2FpDznxky1sjfxvvYEyzxNyb6lZXHODmEoJeFu4JupYlkfha1KZpJyoqiJ7pgA1qq8Q==", + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/spdx-correct/-/spdx-correct-3.1.1.tgz", + "integrity": "sha512-cOYcUWwhCuHCXi49RhFRCyJEK3iPj1Ziz9DpViV3tbZOwXD49QzIN3MpOLJNxh2qwq2lJJZaKMVw9qNi4jTC0w==", "dev": true, "requires": { "spdx-expression-parse": "^3.0.0", @@ -4312,15 +4094,15 @@ } }, "spdx-exceptions": { - "version": "2.2.0", - "resolved": "https://registry.npmjs.org/spdx-exceptions/-/spdx-exceptions-2.2.0.tgz", - "integrity": "sha512-2XQACfElKi9SlVb1CYadKDXvoajPgBVPn/gOQLrTvHdElaVhr7ZEbqJaRnJLVNeaI4cMEAgVCeBMKF6MWRDCRA==", + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/spdx-exceptions/-/spdx-exceptions-2.3.0.tgz", + "integrity": "sha512-/tTrYOC7PPI1nUAgx34hUpqXuyJG+DTHJTnIULG4rDygi4xu/tfgmq1e1cIRwRzwZgo4NLySi+ricLkZkw4i5A==", "dev": true }, "spdx-expression-parse": { - "version": "3.0.0", - "resolved": 
"https://registry.npmjs.org/spdx-expression-parse/-/spdx-expression-parse-3.0.0.tgz", - "integrity": "sha512-Yg6D3XpRD4kkOmTpdgbUiEJFKghJH03fiC1OPll5h/0sO6neh2jqRDVHOQ4o/LMea0tgCkbMgea5ip/e+MkWyg==", + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/spdx-expression-parse/-/spdx-expression-parse-3.0.1.tgz", + "integrity": "sha512-cbqHunsQWnJNE6KhVSMsMeH5H/L9EpymbzqTQ3uLwNCLZ1Q481oWaofqH7nO6V07xlXwY6PhQdQ2IedWx/ZK4Q==", "dev": true, "requires": { "spdx-exceptions": "^2.1.0", @@ -4328,9 +4110,9 @@ } }, "spdx-license-ids": { - "version": "3.0.4", - "resolved": "https://registry.npmjs.org/spdx-license-ids/-/spdx-license-ids-3.0.4.tgz", - "integrity": "sha512-7j8LYJLeY/Yb6ACbQ7F76qy5jHkp0U6jgBfJsk97bwWlVUnUWsAgpyaCvo17h0/RQGnQ036tVDomiwoI4pDkQA==", + "version": "3.0.6", + "resolved": "https://registry.npmjs.org/spdx-license-ids/-/spdx-license-ids-3.0.6.tgz", + "integrity": "sha512-+orQK83kyMva3WyPf59k1+Y525csj5JejicWut55zeTWANuN17qSiSLUXWtzHeNWORSvT7GLDJ/E/XiIWoXBTw==", "dev": true }, "split-string": { @@ -4342,6 +4124,12 @@ "extend-shallow": "^3.0.0" } }, + "sprintf-js": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz", + "integrity": "sha1-BOaSb2YolTVPPdAVIDYzuFcpfiw=", + "dev": true + }, "sshpk": { "version": "1.16.1", "resolved": "https://registry.npmjs.org/sshpk/-/sshpk-1.16.1.tgz", @@ -4360,10 +4148,21 @@ } }, "stack-utils": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/stack-utils/-/stack-utils-1.0.2.tgz", - "integrity": "sha512-MTX+MeG5U994cazkjd/9KNAapsHnibjMLnfXodlkXw76JEea0UiNzrqidzo1emMwk7w5Qhc9jd4Bn9TBb1MFwA==", - "dev": true + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/stack-utils/-/stack-utils-2.0.2.tgz", + "integrity": "sha512-0H7QK2ECz3fyZMzQ8rH0j2ykpfbnd20BFtfg/SqVC2+sCTtcw0aDTGB7dk+de4U4uUeuz6nOtJcrkFFLG1B0Rg==", + "dev": true, + "requires": { + "escape-string-regexp": "^2.0.0" + }, + "dependencies": { + "escape-string-regexp": { + "version": 
"2.0.0", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-2.0.0.tgz", + "integrity": "sha512-UpzcLCXolUWcNu5HtVMHYdXJjArjsF9C0aNnquZYY4uW/Vu0miy5YoWvbV345HauVvcAUnpRuhMMcqTcGOY2+w==", + "dev": true + } + } }, "static-extend": { "version": "0.1.2", @@ -4393,81 +4192,39 @@ "dev": true }, "string-length": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/string-length/-/string-length-2.0.0.tgz", - "integrity": "sha1-1A27aGo6zpYMHP/KVivyxF+DY+0=", + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/string-length/-/string-length-4.0.1.tgz", + "integrity": "sha512-PKyXUd0LK0ePjSOnWn34V2uD6acUWev9uy0Ft05k0E8xRW+SKcA0F7eMr7h5xlzfn+4O3N+55rduYyet3Jk+jw==", "dev": true, "requires": { - "astral-regex": "^1.0.0", - "strip-ansi": "^4.0.0" - }, - "dependencies": { - "ansi-regex": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz", - "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg=", - "dev": true - }, - "strip-ansi": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz", - "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=", - "dev": true, - "requires": { - "ansi-regex": "^3.0.0" - } - } + "char-regex": "^1.0.2", + "strip-ansi": "^6.0.0" } }, "string-width": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz", - "integrity": "sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLNIOlqg4F2yrT1MVaTjAqvVwdfeZ7w7aCvJD7ugkw==", - "dev": true, - "requires": { - "is-fullwidth-code-point": "^2.0.0", - "strip-ansi": "^4.0.0" - }, - "dependencies": { - "ansi-regex": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz", - "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg=", - "dev": true - }, - "strip-ansi": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz", - "integrity": 
"sha1-qEeQIusaw2iocTibY1JixQXuNo8=", - "dev": true, - "requires": { - "ansi-regex": "^3.0.0" - } - } - } - }, - "string_decoder": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz", - "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==", + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz", + "integrity": "sha512-zUz5JD+tgqtuDjMhwIg5uFVV3dtqZ9yQJlZVfq4I01/K5Paj5UHj7VyrQOJvzawSVlKpObApbfD0Ed6yJc+1eg==", "dev": true, "requires": { - "safe-buffer": "~5.1.0" + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.0" } }, "strip-ansi": { - "version": "5.2.0", - "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz", - "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==", + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz", + "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==", "dev": true, "requires": { - "ansi-regex": "^4.1.0" + "ansi-regex": "^5.0.0" } }, "strip-bom": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/strip-bom/-/strip-bom-3.0.0.tgz", - "integrity": "sha1-IzTBjpx1n3vdVv3vfprj1YjmjtM=", + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/strip-bom/-/strip-bom-4.0.0.tgz", + "integrity": "sha512-3xurFv5tEgii33Zi8Jtp55wEIILR9eh34FAW00PZf+JnSsTmV/ioewSgQl97JHvgjoRGwPShsWm+IdrxB35d0w==", "dev": true }, "strip-eof": { @@ -4476,13 +4233,29 @@ "integrity": "sha1-u0P/VZim6wXYm1n80SnJgzE2Br8=", "dev": true }, + "strip-final-newline": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-2.0.0.tgz", + "integrity": 
"sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA==", + "dev": true + }, "supports-color": { - "version": "5.5.0", - "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", - "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "version": "7.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "requires": { + "has-flag": "^4.0.0" + } + }, + "supports-hyperlinks": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/supports-hyperlinks/-/supports-hyperlinks-2.1.0.tgz", + "integrity": "sha512-zoE5/e+dnEijk6ASB6/qrK+oYdm2do1hjoLWrqUC/8WEIW1gbxFcKuBof7sW8ArN6e+AYvsE8HBGiVRWL/F5CA==", "dev": true, "requires": { - "has-flag": "^3.0.0" + "has-flag": "^4.0.0", + "supports-color": "^7.0.0" } }, "symbol-tree": { @@ -4491,22 +4264,31 @@ "integrity": "sha512-9QNk5KwDF+Bvz+PyObkmSYjI5ksVUYtjW7AU22r2NKcfLJcXp96hkDWU3+XndOsUb+AQ9QhfzfCT2O+CNWT5Tw==", "dev": true }, + "terminal-link": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/terminal-link/-/terminal-link-2.1.1.tgz", + "integrity": "sha512-un0FmiRUQNr5PJqy9kP7c40F5BOfpGlYTrxonDChEZB7pzZxRNp/bt+ymiy9/npwXya9KH99nJ/GXFIiUkYGFQ==", + "dev": true, + "requires": { + "ansi-escapes": "^4.2.1", + "supports-hyperlinks": "^2.0.0" + } + }, "test-exclude": { - "version": "5.2.3", - "resolved": "https://registry.npmjs.org/test-exclude/-/test-exclude-5.2.3.tgz", - "integrity": "sha512-M+oxtseCFO3EDtAaGH7iiej3CBkzXqFMbzqYAACdzKui4eZA+pq3tZEwChvOdNfa7xxy8BfbmgJSIr43cC/+2g==", + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/test-exclude/-/test-exclude-6.0.0.tgz", + "integrity": "sha512-cAGWPIyOHU6zlmg88jwm7VRyXnMN7iV68OGAbYDk/Mh/xC/pzVPlQtY6ngoIH/5/tciuhGfvESU8GrHrcxD56w==", "dev": true, 
"requires": { - "glob": "^7.1.3", - "minimatch": "^3.0.4", - "read-pkg-up": "^4.0.0", - "require-main-filename": "^2.0.0" + "@istanbuljs/schema": "^0.1.2", + "glob": "^7.1.4", + "minimatch": "^3.0.4" } }, "throat": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/throat/-/throat-4.1.0.tgz", - "integrity": "sha1-iQN8vJLFarGJJua6TLsgDhVnKmo=", + "version": "5.0.0", + "resolved": "https://registry.npmjs.org/throat/-/throat-5.0.0.tgz", + "integrity": "sha512-fcwX4mndzpLQKBS1DVYhGAcYaYt7vsHNIvQV+WXMvnow5cgjPphq5CaayLaGsjRdSCKZFNGt7/GYAuXaNOiYCA==", "dev": true }, "tmpl": { @@ -4554,40 +4336,34 @@ } }, "to-regex-range": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-2.1.1.tgz", - "integrity": "sha1-fIDBe53+vlmeJzZ+DU3VWQFB2zg=", + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", + "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", "dev": true, "requires": { - "is-number": "^3.0.0", - "repeat-string": "^1.6.1" + "is-number": "^7.0.0" } }, "tough-cookie": { - "version": "2.5.0", - "resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.5.0.tgz", - "integrity": "sha512-nlLsUzgm1kfLXSXfRZMc1KLAugd4hqJHDTvc2hDIwS3mZAfMEuMbc03SujMF+GEcpaX/qboeycw6iO8JwVv2+g==", + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/tough-cookie/-/tough-cookie-3.0.1.tgz", + "integrity": "sha512-yQyJ0u4pZsv9D4clxO69OEjLWYw+jbgspjTue4lTQZLfV0c5l1VmK2y1JK8E9ahdpltPOaAThPcp5nKPUgSnsg==", "dev": true, "requires": { + "ip-regex": "^2.1.0", "psl": "^1.1.28", "punycode": "^2.1.1" } }, "tr46": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/tr46/-/tr46-1.0.1.tgz", - "integrity": "sha1-qLE/1r/SSJUZZ0zN5VujaTtwbQk=", + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/tr46/-/tr46-2.0.2.tgz", + "integrity": 
"sha512-3n1qG+/5kg+jrbTzwAykB5yRYtQCTqOGKq5U5PE3b0a1/mzo6snDhjGS0zJVJunO0NrT3Dg1MLy5TjWP/UJppg==", "dev": true, "requires": { - "punycode": "^2.1.0" + "punycode": "^2.1.1" } }, - "trim-right": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/trim-right/-/trim-right-1.0.1.tgz", - "integrity": "sha1-yy4SAwZ+DI3h9hQJS5/kVwTqYAM=", - "dev": true - }, "tunnel-agent": { "version": "0.6.0", "resolved": "https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.6.0.tgz", @@ -4612,15 +4388,25 @@ "prelude-ls": "~1.1.2" } }, - "uglify-js": { - "version": "3.6.0", - "resolved": "https://registry.npmjs.org/uglify-js/-/uglify-js-3.6.0.tgz", - "integrity": "sha512-W+jrUHJr3DXKhrsS7NUVxn3zqMOFn0hL/Ei6v0anCIMoKC93TjcflTagwIHLW7SfMFfiQuktQyFVCFHGUE0+yg==", + "type-detect": { + "version": "4.0.8", + "resolved": "https://registry.npmjs.org/type-detect/-/type-detect-4.0.8.tgz", + "integrity": "sha512-0fr/mIH1dlO+x7TlcMy+bIDqKPsw/70tVyeHW787goQjhmqaZe10uwLujubK9q9Lg6Fiho1KUKDYz0Z7k7g5/g==", + "dev": true + }, + "type-fest": { + "version": "0.8.1", + "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.8.1.tgz", + "integrity": "sha512-4dbzIzqvjtgiM5rw1k5rEHtBANKmdudhGyBEajN01fEyhaAIhsoKNy6y7+IN93IfpFtwY9iqi7kD+xwKhQsNJA==", + "dev": true + }, + "typedarray-to-buffer": { + "version": "3.1.5", + "resolved": "https://registry.npmjs.org/typedarray-to-buffer/-/typedarray-to-buffer-3.1.5.tgz", + "integrity": "sha512-zdu8XMNEDepKKR+XYOXAVPtWui0ly0NtohUscw+UmaHiAWT8hrV1rr//H6V+0DvJ3OQ19S979M0laLfX8rm82Q==", "dev": true, - "optional": true, "requires": { - "commander": "~2.20.0", - "source-map": "~0.6.1" + "is-typedarray": "^1.0.0" } }, "union-value": { @@ -4676,9 +4462,9 @@ } }, "uri-js": { - "version": "4.2.2", - "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.2.2.tgz", - "integrity": "sha512-KY9Frmirql91X2Qgjry0Wd4Y+YTdrdZheS8TFwvkbLWf/G5KNJDCh6pKL5OZctEW4+0Baa5idK2ZQuELRwPznQ==", + "version": "4.4.0", + "resolved": 
"https://registry.npmjs.org/uri-js/-/uri-js-4.4.0.tgz", + "integrity": "sha512-B0yRTzYdUCCn9n+F4+Gh4yIDtMQcaJsmYBDsTSG8g/OejKBodLQ2IHfN3bM7jUsRXndopT7OIXWdYqc1fjmV6g==", "dev": true, "requires": { "punycode": "^2.1.0" @@ -4696,28 +4482,32 @@ "integrity": "sha512-cwESVXlO3url9YWlFW/TA9cshCEhtu7IKJ/p5soJ/gGpj7vbvFrAY/eIioQ6Dw23KjZhYgiIo8HOs1nQ2vr/oQ==", "dev": true }, - "util-deprecate": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", - "integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8=", - "dev": true + "uuid": { + "version": "8.3.1", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.1.tgz", + "integrity": "sha512-FOmRr+FmWEIG8uhZv6C2bTgEVXsHk08kE7mPlrBbEe+c3r9pjceVPgupIfNIhc4yx55H69OXANrUaSuu9eInKg==", + "dev": true, + "optional": true }, - "util.promisify": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/util.promisify/-/util.promisify-1.0.0.tgz", - "integrity": "sha512-i+6qA2MPhvoKLuxnJNpXAGhg7HphQOSUq2LKMZD0m15EiskXUkMvKdF4Uui0WYeCUGea+o2cw/ZuwehtfsrNkA==", + "v8-to-istanbul": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/v8-to-istanbul/-/v8-to-istanbul-6.0.1.tgz", + "integrity": "sha512-PzM1WlqquhBvsV+Gco6WSFeg1AGdD53ccMRkFeyHRE/KRZaVacPOmQYP3EeVgDBtKD2BJ8kgynBQ5OtKiHCH+w==", "dev": true, "requires": { - "define-properties": "^1.1.2", - "object.getownpropertydescriptors": "^2.0.3" + "@types/istanbul-lib-coverage": "^2.0.1", + "convert-source-map": "^1.6.0", + "source-map": "^0.7.3" + }, + "dependencies": { + "source-map": { + "version": "0.7.3", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.7.3.tgz", + "integrity": "sha512-CkCj6giN3S+n9qrYiBTX5gystlENnRW5jZeNLHpe6aue+SrHcG5VYwujhW9s4dY31mEGsxBDrHR6oI69fTXsaQ==", + "dev": true + } } }, - "uuid": { - "version": "3.3.2", - "resolved": "https://registry.npmjs.org/uuid/-/uuid-3.3.2.tgz", - "integrity": 
"sha512-yXJmeNaw3DnnKAOKJE51sL/ZaYfWJRl1pK9dr19YFCu0ObS231AB1/LbqTKRAQ5kw8A90rA6fr4riOUpTZvQZA==", - "dev": true - }, "validate-npm-package-license": { "version": "3.0.4", "resolved": "https://registry.npmjs.org/validate-npm-package-license/-/validate-npm-package-license-3.0.4.tgz", @@ -4740,12 +4530,21 @@ } }, "w3c-hr-time": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/w3c-hr-time/-/w3c-hr-time-1.0.1.tgz", - "integrity": "sha1-gqwr/2PZUOqeMYmlimViX+3xkEU=", + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/w3c-hr-time/-/w3c-hr-time-1.0.2.tgz", + "integrity": "sha512-z8P5DvDNjKDoFIHK7q8r8lackT6l+jo/Ye3HOle7l9nICP9lf1Ci25fy9vHd0JOWewkIFzXIEig3TdKT7JQ5fQ==", + "dev": true, + "requires": { + "browser-process-hrtime": "^1.0.0" + } + }, + "w3c-xmlserializer": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/w3c-xmlserializer/-/w3c-xmlserializer-2.0.0.tgz", + "integrity": "sha512-4tzD0mF8iSiMiNs30BiLO3EpfGLZUT2MSX/G+o7ZywDzliWQ3OPtTZ0PTC3B3ca1UAf4cJMHB+2Bf56EriJuRA==", "dev": true, "requires": { - "browser-process-hrtime": "^0.1.2" + "xml-name-validator": "^3.0.0" } }, "walker": { @@ -4758,9 +4557,9 @@ } }, "webidl-conversions": { - "version": "4.0.2", - "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-4.0.2.tgz", - "integrity": "sha512-YQ+BmxuTgd6UXZW3+ICGfyqRyHXVlD5GtQr5+qjiNW7bF0cqrzX500HVXPBOvgXb5YnzDd+h0zqyv61KUD7+Sg==", + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-6.1.0.tgz", + "integrity": "sha512-qBIvFLGiBpLjfwmYAaHPXsn+ho5xZnGvyGvsarywGNc8VyQJUMHJ8OBKGGrPER0okBeMDaan4mNBlgBROxuI8w==", "dev": true }, "whatwg-encoding": { @@ -4779,14 +4578,14 @@ "dev": true }, "whatwg-url": { - "version": "6.5.0", - "resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-6.5.0.tgz", - "integrity": "sha512-rhRZRqx/TLJQWUpQ6bmrt2UV4f0HCQ463yQuONJqC6fO2VoEb1pTYddbe59SkYq87aoM5A3bdhMZiUiVws+fzQ==", + "version": "8.4.0", + "resolved": 
"https://registry.npmjs.org/whatwg-url/-/whatwg-url-8.4.0.tgz", + "integrity": "sha512-vwTUFf6V4zhcPkWp/4CQPr1TW9Ml6SF4lVyaIMBdJw5i6qUUJ1QWM4Z6YYVkfka0OUIzVo/0aNtGVGk256IKWw==", "dev": true, "requires": { "lodash.sortby": "^4.7.0", - "tr46": "^1.0.1", - "webidl-conversions": "^4.0.2" + "tr46": "^2.0.2", + "webidl-conversions": "^6.1.0" } }, "which": { @@ -4804,57 +4603,21 @@ "integrity": "sha1-2e8H3Od7mQK4o6j6SzHD4/fm6Ho=", "dev": true }, - "wordwrap": { - "version": "0.0.3", - "resolved": "https://registry.npmjs.org/wordwrap/-/wordwrap-0.0.3.tgz", - "integrity": "sha1-o9XabNXAvAAI03I0u68b7WMFkQc=", + "word-wrap": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.3.tgz", + "integrity": "sha512-Hz/mrNwitNRh/HUAtM/VT/5VH+ygD6DV7mYKZAtHOrbs8U7lvPS6xf7EJKMF0uW1KJCl0H701g3ZGus+muE5vQ==", "dev": true }, "wrap-ansi": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-2.1.0.tgz", - "integrity": "sha1-2Pw9KE3QV5T+hJc8rs3Rz4JP3YU=", + "version": "6.2.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-6.2.0.tgz", + "integrity": "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==", "dev": true, "requires": { - "string-width": "^1.0.1", - "strip-ansi": "^3.0.1" - }, - "dependencies": { - "ansi-regex": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.1.1.tgz", - "integrity": "sha1-w7M6te42DYbg5ijwRorn7yfWVN8=", - "dev": true - }, - "is-fullwidth-code-point": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-1.0.0.tgz", - "integrity": "sha1-754xOG8DGn8NZDr4L95QxFfvAMs=", - "dev": true, - "requires": { - "number-is-nan": "^1.0.0" - } - }, - "string-width": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/string-width/-/string-width-1.0.2.tgz", - "integrity": "sha1-EYvfW4zcUaKn5w0hHgfisLmxB9M=", - "dev": true, - 
"requires": { - "code-point-at": "^1.0.0", - "is-fullwidth-code-point": "^1.0.0", - "strip-ansi": "^3.0.0" - } - }, - "strip-ansi": { - "version": "3.0.1", - "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-3.0.1.tgz", - "integrity": "sha1-ajhfuIU9lS1f8F0Oiq+UJ43GPc8=", - "dev": true, - "requires": { - "ansi-regex": "^2.0.0" - } - } + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" } }, "wrappy": { @@ -4864,24 +4627,22 @@ "dev": true }, "write-file-atomic": { - "version": "2.4.1", - "resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-2.4.1.tgz", - "integrity": "sha512-TGHFeZEZMnv+gBFRfjAcxL5bPHrsGKtnb4qsFAws7/vlh+QfwAaySIw4AXP9ZskTTh5GWu3FLuJhsWVdiJPGvg==", + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-3.0.3.tgz", + "integrity": "sha512-AvHcyZ5JnSfq3ioSyjrBkH9yW4m7Ayk8/9My/DD9onKeu/94fwrMocemO2QAJFAlnnDN+ZDS+ZjAR5ua1/PV/Q==", "dev": true, "requires": { - "graceful-fs": "^4.1.11", "imurmurhash": "^0.1.4", - "signal-exit": "^3.0.2" + "is-typedarray": "^1.0.0", + "signal-exit": "^3.0.2", + "typedarray-to-buffer": "^3.1.5" } }, "ws": { - "version": "5.2.2", - "resolved": "https://registry.npmjs.org/ws/-/ws-5.2.2.tgz", - "integrity": "sha512-jaHFD6PFv6UgoIVda6qZllptQsMlDEJkTQcybzzXDYM1XO9Y8em691FGMPmM46WGyLU4z9KMgQN+qrux/nhlHA==", - "dev": true, - "requires": { - "async-limiter": "~1.0.0" - } + "version": "7.3.1", + "resolved": "https://registry.npmjs.org/ws/-/ws-7.3.1.tgz", + "integrity": "sha512-D3RuNkynyHmEJIpD2qrgVkc9DQ23OrN/moAwZX4L8DfvszsJxpjQuUq3LMx6HoYji9fbIOBY18XWBsAux1ZZUA==", + "dev": true }, "xml-name-validator": { "version": "3.0.0", @@ -4889,6 +4650,12 @@ "integrity": "sha512-A5CUptxDsvxKJEU3yO6DuWBSJz/qizqzJKOMIfUJHETbBw/sFaDxgd6fxm1ewUaM0jZ444Fc5vC5ROYurg/4Pw==", "dev": true }, + "xmlchars": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/xmlchars/-/xmlchars-2.2.0.tgz", + "integrity": 
"sha512-JZnDKK8B0RCDw84FNdDAIpZK+JuJw+s7Lz8nksI7SIuU3UXJJslUthsi+uWBUYOwPFwW7W7PRLRfUKpxjtjFCw==", + "dev": true + }, "y18n": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz", @@ -4896,37 +4663,28 @@ "dev": true }, "yargs": { - "version": "12.0.5", - "resolved": "https://registry.npmjs.org/yargs/-/yargs-12.0.5.tgz", - "integrity": "sha512-Lhz8TLaYnxq/2ObqHDql8dX8CJi97oHxrjUcYtzKbbykPtVW9WB+poxI+NM2UIzsMgNCZTIf0AQwsjK5yMAqZw==", + "version": "15.4.1", + "resolved": "https://registry.npmjs.org/yargs/-/yargs-15.4.1.tgz", + "integrity": "sha512-aePbxDmcYW++PaqBsJ+HYUFwCdv4LVvdnhBy78E57PIor8/OVvhMrADFFEDh8DHDFRv/O9i3lPhsENjO7QX0+A==", "dev": true, "requires": { - "cliui": "^4.0.0", + "cliui": "^6.0.0", "decamelize": "^1.2.0", - "find-up": "^3.0.0", - "get-caller-file": "^1.0.1", - "os-locale": "^3.0.0", + "find-up": "^4.1.0", + "get-caller-file": "^2.0.1", "require-directory": "^2.1.1", - "require-main-filename": "^1.0.1", + "require-main-filename": "^2.0.0", "set-blocking": "^2.0.0", - "string-width": "^2.0.0", + "string-width": "^4.2.0", "which-module": "^2.0.0", - "y18n": "^3.2.1 || ^4.0.0", - "yargs-parser": "^11.1.1" - }, - "dependencies": { - "require-main-filename": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-1.0.1.tgz", - "integrity": "sha1-l/cXtp1IeE9fUmpsWqj/3aBVpNE=", - "dev": true - } + "y18n": "^4.0.0", + "yargs-parser": "^18.1.2" } }, "yargs-parser": { - "version": "11.1.1", - "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz", - "integrity": "sha512-C6kB/WJDiaxONLJQnF8ccx9SEeoTTLek8RVbaOIsrAUS8VrBEXfmeSnCZxygc+XC2sNMBIwOOnfcxiynjHsVSQ==", + "version": "18.1.3", + "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-18.1.3.tgz", + "integrity": "sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ==", "dev": true, "requires": { "camelcase": "^5.0.0", diff --git 
a/package.json b/package.json index 84039bb9e5..e80152ce22 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "jspsych", - "version": "6.1.0", + "version": "6.3.1", "description": "Behavioral experiments in a browser", "main": "jspsych.js", "directories": { @@ -11,16 +11,16 @@ }, "repository": { "type": "git", - "url": "git+https://github.com/jodeleeuw/jsPsych.git" + "url": "git+https://github.com/jspsych/jsPsych.git" }, "author": "Josh de Leeuw", "license": "MIT", "bugs": { - "url": "https://github.com/jodeleeuw/jsPsych/issues" + "url": "https://github.com/jspsych/jsPsych/issues" }, - "homepage": "https://github.com/jodeleeuw/jsPsych#readme", + "homepage": "https://github.com/jspsych/jsPsych#readme", "devDependencies": { - "jest": "^24.8" + "jest": "^26.6" }, "jest": { "resetModules": true, diff --git a/plugins/jspsych-animation.js b/plugins/jspsych-animation.js index 00fbf669d1..ce56c9d6de 100644 --- a/plugins/jspsych-animation.js +++ b/plugins/jspsych-animation.js @@ -41,7 +41,7 @@ jsPsych.plugins.animation = (function() { description: 'Number of times to show entire sequence.' }, choices: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Choices', default: jsPsych.ALL_KEYS, array: true, @@ -52,6 +52,13 @@ jsPsych.plugins.animation = (function() { pretty_name: 'Prompt', default: null, description: 'Any content here will be displayed below stimulus.' + }, + render_on_canvas: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Render on canvas', + default: true, + description: 'If true, the images will be drawn onto a canvas element (prevents blank screen between consecutive images in some browsers).'+ + 'If false, the image will be shown via an img element.' 
} } } @@ -66,9 +73,27 @@ jsPsych.plugins.animation = (function() { var responses = []; var current_stim = ""; + if (trial.render_on_canvas) { + // first clear the display element (because the render_on_canvas method appends to display_element instead of overwriting it with .innerHTML) + if (display_element.hasChildNodes()) { + // can't loop through child list because the list will be modified by .removeChild() + while (display_element.firstChild) { + display_element.removeChild(display_element.firstChild); + } + } + var canvas = document.createElement("canvas"); + canvas.id = "jspsych-animation-image"; + canvas.style.margin = 0; + canvas.style.padding = 0; + display_element.insertBefore(canvas, null); + var ctx = canvas.getContext("2d"); + } + var animate_interval = setInterval(function() { var showImage = true; - display_element.innerHTML = ''; // clear everything + if (!trial.render_on_canvas) { + display_element.innerHTML = ''; // clear everything + } animate_frame++; if (animate_frame == trial.stimuli.length) { animate_frame = 0; @@ -85,9 +110,23 @@ jsPsych.plugins.animation = (function() { }, interval_time); function show_next_frame() { - // show image - display_element.innerHTML = ''; - + if (trial.render_on_canvas) { + display_element.querySelector('#jspsych-animation-image').style.visibility = 'visible'; + var img = new Image(); + img.src = trial.stimuli[animate_frame]; + canvas.height = img.naturalHeight; + canvas.width = img.naturalWidth; + ctx.drawImage(img,0,0); + if (trial.prompt !== null & animate_frame == 0 & reps == 0) { + display_element.insertAdjacentHTML('beforeend', trial.prompt); + } + } else { + // show image + display_element.innerHTML = ''; + if (trial.prompt !== null) { + display_element.innerHTML += trial.prompt; + } + } current_stim = trial.stimuli[animate_frame]; // record when image was shown @@ -96,10 +135,6 @@ jsPsych.plugins.animation = (function() { "time": performance.now() - startTime }); - if (trial.prompt !== null) { - 
display_element.innerHTML += trial.prompt; - } - if (trial.frame_isi > 0) { jsPsych.pluginAPI.setTimeout(function() { display_element.querySelector('#jspsych-animation-image').style.visibility = 'hidden'; @@ -142,8 +177,8 @@ jsPsych.plugins.animation = (function() { jsPsych.pluginAPI.cancelKeyboardResponse(response_listener); var trial_data = { - "animation_sequence": JSON.stringify(animation_sequence), - "responses": JSON.stringify(responses) + animation_sequence: animation_sequence, + response: responses }; jsPsych.finishTrial(trial_data); diff --git a/plugins/jspsych-audio-button-response.js b/plugins/jspsych-audio-button-response.js index 6341e2cbe0..a7146a2093 100644 --- a/plugins/jspsych-audio-button-response.js +++ b/plugins/jspsych-audio-button-response.js @@ -8,28 +8,28 @@ * **/ -jsPsych.plugins["audio-button-response"] = (function() { - var plugin = {}; +jsPsych.plugins["audio-button-response"] = (function () { + var plugin = {}; - jsPsych.pluginAPI.registerPreload('audio-button-response', 'stimulus', 'audio'); + jsPsych.pluginAPI.registerPreload('audio-button-response', 'stimulus', 'audio'); - plugin.info = { - name: 'audio-button-response', - description: '', - parameters: { - stimulus: { - type: jsPsych.plugins.parameterType.AUDIO, + plugin.info = { + name: 'audio-button-response', + description: '', + parameters: { + stimulus: { + type: jsPsych.plugins.parameterType.AUDIO, pretty_name: 'Stimulus', - default: undefined, - description: 'The audio to be played.' - }, - choices: { - type: jsPsych.plugins.parameterType.STRING, + default: undefined, + description: 'The audio to be played.' + }, + choices: { + type: jsPsych.plugins.parameterType.STRING, pretty_name: 'Choices', - default: undefined, - array: true, - description: 'The button labels.' - }, + default: undefined, + array: true, + description: 'The button labels.' 
+ }, button_html: { type: jsPsych.plugins.parameterType.HTML_STRING, pretty_name: 'Button HTML', @@ -73,116 +73,158 @@ jsPsych.plugins["audio-button-response"] = (function() { default: false, description: 'If true, then the trial will end as soon as the audio file finishes playing.' }, + response_allowed_while_playing: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Response allowed while playing', + default: true, + description: 'If true, then responses are allowed while the audio is playing. ' + + 'If false, then the audio must finish playing before a response is accepted.' + } } } - plugin.trial = function(display_element, trial) { + plugin.trial = function (display_element, trial) { // setup stimulus var context = jsPsych.pluginAPI.audioContext(); - if(context !== null){ - var source = context.createBufferSource(); - source.buffer = jsPsych.pluginAPI.getAudioBuffer(trial.stimulus); - source.connect(context.destination); - } else { - var audio = jsPsych.pluginAPI.getAudioBuffer(trial.stimulus); - audio.currentTime = 0; - } + var audio; - // set up end event if trial needs it - if(trial.trial_ends_after_audio){ - if(context !== null){ - source.onended = function() { - end_trial(); + // store response + var response = { + rt: null, + button: null + }; + + // record webaudio context start time + var startTime; + + // load audio file + jsPsych.pluginAPI.getAudioBuffer(trial.stimulus) + .then(function (buffer) { + if (context !== null) { + audio = context.createBufferSource(); + audio.buffer = buffer; + audio.connect(context.destination); + } else { + audio = buffer; + audio.currentTime = 0; } - } else { + setupTrial(); + }) + .catch(function (err) { + console.error(`Failed to load audio file "${trial.stimulus}". Try checking the file path. 
We recommend using the preload plugin to load audio files.`) + console.error(err) + }); + + function setupTrial() { + // set up end event if trial needs it + if (trial.trial_ends_after_audio) { audio.addEventListener('ended', end_trial); } - } - //display buttons - var buttons = []; - if (Array.isArray(trial.button_html)) { - if (trial.button_html.length == trial.choices.length) { - buttons = trial.button_html; + // enable buttons after audio ends if necessary + if ((!trial.response_allowed_while_playing) & (!trial.trial_ends_after_audio)) { + audio.addEventListener('ended', enable_buttons); + } + + //display buttons + var buttons = []; + if (Array.isArray(trial.button_html)) { + if (trial.button_html.length == trial.choices.length) { + buttons = trial.button_html; + } else { + console.error('Error in audio-button-response plugin. The length of the button_html array does not equal the length of the choices array'); + } } else { - console.error('Error in image-button-response plugin. The length of the button_html array does not equal the length of the choices array'); + for (var i = 0; i < trial.choices.length; i++) { + buttons.push(trial.button_html); + } } - } else { + + var html = '
<div id="jspsych-audio-button-response-btngroup">'; for (var i = 0; i < trial.choices.length; i++) { - buttons.push(trial.button_html); + var str = buttons[i].replace(/%choice%/g, trial.choices[i]); + html += '<div class="jspsych-audio-button-response-button" style="display: inline-block; margin:' + trial.margin_vertical + ' ' + trial.margin_horizontal + '" id="jspsych-audio-button-response-button-' + i + '" data-choice="' + i + '">' + str + '</div>'; } - } + html += '</div>'; - var html = '<div id="jspsych-audio-button-response-btngroup">'; - for (var i = 0; i < trial.choices.length; i++) { - var str = buttons[i].replace(/%choice%/g, trial.choices[i]); - html += '<div class="jspsych-audio-button-response-button" style="display: inline-block; margin:'+trial.margin_vertical+' '+trial.margin_horizontal+'" id="jspsych-audio-button-response-button-' + i +'" data-choice="'+i+'">'+str+'</div>'; - } - html += '</div>
'; + //show prompt if there is one + if (trial.prompt !== null) { + html += trial.prompt; + } - //show prompt if there is one - if (trial.prompt !== null) { - html += trial.prompt; - } + display_element.innerHTML = html; - display_element.innerHTML = html; + if (trial.response_allowed_while_playing) { + enable_buttons(); + } else { + disable_buttons(); + } - for (var i = 0; i < trial.choices.length; i++) { - display_element.querySelector('#jspsych-audio-button-response-button-' + i).addEventListener('click', function(e){ - var choice = e.currentTarget.getAttribute('data-choice'); // don't use dataset for jsdom compatibility - after_response(choice); - }); + // start time + startTime = performance.now(); + + // start audio + if (context !== null) { + startTime = context.currentTime; + audio.start(startTime); + } else { + audio.play(); + } + + // end trial if time limit is set + if (trial.trial_duration !== null) { + jsPsych.pluginAPI.setTimeout(function () { + end_trial(); + }, trial.trial_duration); + } } - // store response - var response = { - rt: null, - button: null - }; + // function to handle responses by the subject function after_response(choice) { // measure rt - var end_time = performance.now(); - var rt = end_time - start_time; - response.button = choice; + var endTime = performance.now(); + var rt = endTime - startTime; + if (context !== null) { + endTime = context.currentTime; + rt = Math.round((endTime - startTime) * 1000); + } + response.button = parseInt(choice); response.rt = rt; // disable all the buttons after a response - var btns = document.querySelectorAll('.jspsych-audio-button-response-button button'); - for(var i=0; i'; - html += '
' - for(var j=0; j < trial.labels.length; j++){ - var width = 100/(trial.labels.length-1); - var left_offset = (j * (100 /(trial.labels.length - 1))) - (width/2); - html += '
'; - html += ''+trial.labels[j]+''; - html += '
' - } - html += '
'; - html += ''; - html += ''; + // enable slider after audio ends if necessary + if ((!trial.response_allowed_while_playing) & (!trial.trial_ends_after_audio)) { - if (trial.prompt !== null){ - html += trial.prompt; - } + audio.addEventListener('ended', enable_slider); - // add submit button - html += ''; + } - display_element.innerHTML = html; + var html = '
'; + html += '
'; + html += ''; + html += '' + trial.labels[j] + ''; + html += '
' + } + html += '
'; + html += ''; + html += ''; - var response = { - rt: null, - response: null - }; + if (trial.prompt !== null) { + html += trial.prompt; + } - if(trial.require_movement){ - display_element.querySelector('#jspsych-audio-slider-response-response').addEventListener('change', function(){ - display_element.querySelector('#jspsych-audio-slider-response-next').disabled = false; - }) - } + // add submit button + var next_disabled_attribute = ""; + if (trial.require_movement | !trial.response_allowed_while_playing) { + next_disabled_attribute = "disabled"; + } + html += ''; - display_element.querySelector('#jspsych-audio-slider-response-next').addEventListener('click', function() { - // measure response time - var endTime = performance.now(); - var rt = endTime - startTime; - if(context !== null){ - endTime = context.currentTime; - rt = Math.round((endTime - startTime) * 1000); - } - response.rt = rt; - response.response = display_element.querySelector('#jspsych-audio-slider-response-response').value; - - if(trial.response_ends_trial){ - end_trial(); - } else { + display_element.innerHTML = html; + + response = { + rt: null, + response: null + }; + + if (!trial.response_allowed_while_playing) { + display_element.querySelector('#jspsych-audio-slider-response-response').disabled = true; display_element.querySelector('#jspsych-audio-slider-response-next').disabled = true; } - }); + if (trial.require_movement) { + display_element.querySelector('#jspsych-audio-slider-response-response').addEventListener('click', function () { + display_element.querySelector('#jspsych-audio-slider-response-next').disabled = false; + }); + } - function end_trial(){ + display_element.querySelector('#jspsych-audio-slider-response-next').addEventListener('click', function () { + // measure response time + var endTime = performance.now(); + var rt = endTime - startTime; + if (context !== null) { + endTime = context.currentTime; + rt = Math.round((endTime - startTime) * 1000); + } + response.rt = rt; 
+ response.response = display_element.querySelector('#jspsych-audio-slider-response-response').valueAsNumber; + if (trial.response_ends_trial) { + end_trial(); + } else { + display_element.querySelector('#jspsych-audio-slider-response-next').disabled = true; + } + + }); + + startTime = performance.now(); + // start audio + if (context !== null) { + startTime = context.currentTime; + audio.start(startTime); + } else { + audio.play(); + } + + // end trial if trial_duration is set + if (trial.trial_duration !== null) { + jsPsych.pluginAPI.setTimeout(function () { + end_trial(); + }, trial.trial_duration); + } + } + + // function to enable slider after audio ends + function enable_slider() { + document.querySelector('#jspsych-audio-slider-response-response').disabled = false; + if (!trial.require_movement) { + document.querySelector('#jspsych-audio-slider-response-next').disabled = false; + } + } + + function end_trial() { + + // kill any remaining setTimeout handlers jsPsych.pluginAPI.clearAllTimeouts(); - if(context !== null){ - source.stop(); - source.onended = function() { } + // stop the audio file if it is playing + // remove end event listeners if they exist + if (context !== null) { + audio.stop(); } else { audio.pause(); - audio.removeEventListener('ended', end_trial); } + audio.removeEventListener('ended', end_trial); + audio.removeEventListener('ended', enable_slider); + + // save data var trialdata = { - "rt": response.rt, - "stimulus": trial.stimulus, - "response": response.response + rt: response.rt, + stimulus: trial.stimulus, + slider_start: trial.slider_start, + response: response.response }; display_element.innerHTML = ''; @@ -190,24 +272,6 @@ jsPsych.plugins['audio-slider-response'] = (function() { // next trial jsPsych.finishTrial(trialdata); } - - var startTime = performance.now(); - // start audio - if(context !== null){ - startTime = context.currentTime; - source.start(startTime); - } else { - audio.play(); - } - - // end trial if trial_duration 
is set - if (trial.trial_duration !== null) { - jsPsych.pluginAPI.setTimeout(function() { - end_trial(); - }, trial.trial_duration); - } - - }; return plugin; diff --git a/plugins/jspsych-canvas-button-response.js b/plugins/jspsych-canvas-button-response.js new file mode 100644 index 0000000000..c0a0518083 --- /dev/null +++ b/plugins/jspsych-canvas-button-response.js @@ -0,0 +1,199 @@ +/** + * jspsych-canvas-button-response + * Chris Jungerius (modified from Josh de Leeuw) + * + * a jsPsych plugin for displaying a canvas stimulus and getting a button response + * + * documentation: docs.jspsych.org + * + **/ + +jsPsych.plugins["canvas-button-response"] = (function () { + + var plugin = {}; + + plugin.info = { + name: 'canvas-button-response', + description: '', + parameters: { + stimulus: { + type: jsPsych.plugins.parameterType.FUNCTION, + pretty_name: 'Stimulus', + default: undefined, + description: 'The drawing function to apply to the canvas. Should take the canvas object as argument.' + }, + choices: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Choices', + default: undefined, + array: true, + description: 'The labels for the buttons.' + }, + button_html: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Button HTML', + default: '', + array: true, + description: 'The html of the button. Can create own style.' + }, + prompt: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Prompt', + default: null, + description: 'Any content here will be displayed under the button.' + }, + stimulus_duration: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Stimulus duration', + default: null, + description: 'How long to hide the stimulus.' + }, + trial_duration: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Trial duration', + default: null, + description: 'How long to show the trial.' 
+ }, + margin_vertical: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Margin vertical', + default: '0px', + description: 'The vertical margin of the button.' + }, + margin_horizontal: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Margin horizontal', + default: '8px', + description: 'The horizontal margin of the button.' + }, + response_ends_trial: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Response ends trial', + default: true, + description: 'If true, then trial will end when user responds.' + }, + canvas_size: { + type: jsPsych.plugins.parameterType.INT, + array: true, + pretty_name: 'Canvas size', + default: [500, 500], + description: 'Array containing the height (first value) and width (second value) of the canvas element.' + } + + } + } + + plugin.trial = function (display_element, trial) { + + // create canvas + var html = '
<div id="jspsych-canvas-button-response-stimulus">' + '<canvas id="jspsych-canvas-stimulus" height="' + trial.canvas_size[0] + '" width="' + trial.canvas_size[1] + '"></canvas>' + '</div>
'; + + //display buttons + var buttons = []; + if (Array.isArray(trial.button_html)) { + if (trial.button_html.length == trial.choices.length) { + buttons = trial.button_html; + } else { + console.error('Error in canvas-button-response plugin. The length of the button_html array does not equal the length of the choices array'); + } + } else { + for (var i = 0; i < trial.choices.length; i++) { + buttons.push(trial.button_html); + } + } + html += '
<div id="jspsych-canvas-button-response-btngroup">'; + for (var i = 0; i < trial.choices.length; i++) { + var str = buttons[i].replace(/%choice%/g, trial.choices[i]); + html += '<div class="jspsych-canvas-button-response-button" style="display: inline-block; margin:' + trial.margin_vertical + ' ' + trial.margin_horizontal + '" id="jspsych-canvas-button-response-button-' + i + '" data-choice="' + i + '">' + str + '</div>'; + } + html += '</div>
'; + + //show prompt if there is one + if (trial.prompt !== null) { + html += trial.prompt; + } + display_element.innerHTML = html; + + //draw + let c = document.getElementById("jspsych-canvas-stimulus") + trial.stimulus(c) + + // start time + var start_time = performance.now(); + + // add event listeners to buttons + for (var i = 0; i < trial.choices.length; i++) { + display_element.querySelector('#jspsych-canvas-button-response-button-' + i).addEventListener('click', function (e) { + var choice = e.currentTarget.getAttribute('data-choice'); // don't use dataset for jsdom compatibility + after_response(choice); + }); + } + + // store response + var response = { + rt: null, + button: null + }; + + // function to handle responses by the subject + function after_response(choice) { + + // measure rt + var end_time = performance.now(); + var rt = end_time - start_time; + response.button = parseInt(choice); + response.rt = rt; + + // after a valid response, the stimulus will have the CSS class 'responded' + // which can be used to provide visual feedback that a response was recorded + display_element.querySelector('#jspsych-canvas-button-response-stimulus').className += ' responded'; + + // disable all the buttons after a response + var btns = document.querySelectorAll('.jspsych-canvas-button-response-button button'); + for (var i = 0; i < btns.length; i++) { + //btns[i].removeEventListener('click'); + btns[i].setAttribute('disabled', 'disabled'); + } + + if (trial.response_ends_trial) { + end_trial(); + } + }; + + // function to end trial when it is time + function end_trial() { + + // kill any remaining setTimeout handlers + jsPsych.pluginAPI.clearAllTimeouts(); + + // gather the data to store for the trial + var trial_data = { + rt: response.rt, + response: response.button + }; + + // clear the display + display_element.innerHTML = ''; + + // move on to the next trial + jsPsych.finishTrial(trial_data); + }; + + // hide image if timing is set + if 
(trial.stimulus_duration !== null) { + jsPsych.pluginAPI.setTimeout(function () { + display_element.querySelector('#jspsych-canvas-button-response-stimulus').style.visibility = 'hidden'; + }, trial.stimulus_duration); + } + + // end trial if time limit is set + if (trial.trial_duration !== null) { + jsPsych.pluginAPI.setTimeout(function () { + end_trial(); + }, trial.trial_duration); + } + + }; + + return plugin; +})(); diff --git a/plugins/jspsych-canvas-keyboard-response.js b/plugins/jspsych-canvas-keyboard-response.js new file mode 100644 index 0000000000..08833f5068 --- /dev/null +++ b/plugins/jspsych-canvas-keyboard-response.js @@ -0,0 +1,155 @@ +/** + * jspsych-canvas-keyboard-response + * Chris Jungerius (modified from Josh de Leeuw) + * + * a jsPsych plugin for displaying a canvas stimulus and getting a keyboard response + * + * documentation: docs.jspsych.org + * + **/ + + +jsPsych.plugins["canvas-keyboard-response"] = (function () { + + var plugin = {}; + + plugin.info = { + name: 'canvas-keyboard-response', + description: '', + parameters: { + stimulus: { + type: jsPsych.plugins.parameterType.FUNCTION, + pretty_name: 'Stimulus', + default: undefined, + description: 'The drawing function to apply to the canvas. Should take the canvas object as argument.' + }, + choices: { + type: jsPsych.plugins.parameterType.KEY, + array: true, + pretty_name: 'Choices', + default: jsPsych.ALL_KEYS, + description: 'The keys the subject is allowed to press to respond to the stimulus.' + }, + prompt: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Prompt', + default: null, + description: 'Any content here will be displayed below the stimulus.' + }, + stimulus_duration: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Stimulus duration', + default: null, + description: 'How long to hide the stimulus.' 
+ }, + trial_duration: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Trial duration', + default: null, + description: 'How long to show trial before it ends.' + }, + response_ends_trial: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Response ends trial', + default: true, + description: 'If true, trial will end when subject makes a response.' + }, + canvas_size: { + type: jsPsych.plugins.parameterType.INT, + array: true, + pretty_name: 'Canvas size', + default: [500, 500], + description: 'Array containing the height (first value) and width (second value) of the canvas element.' + } + + } + } + + plugin.trial = function (display_element, trial) { + + var new_html = '
<div id="jspsych-canvas-keyboard-response-stimulus">' + '<canvas id="jspsych-canvas-stimulus" height="' + trial.canvas_size[0] + '" width="' + trial.canvas_size[1] + '"></canvas>' + '</div>
'; + // add prompt + if (trial.prompt !== null) { + new_html += trial.prompt; + } + + // draw + display_element.innerHTML = new_html; + let c = document.getElementById("jspsych-canvas-stimulus") + trial.stimulus(c) + // store response + var response = { + rt: null, + key: null + }; + + // function to end trial when it is time + var end_trial = function () { + + // kill any remaining setTimeout handlers + jsPsych.pluginAPI.clearAllTimeouts(); + + // kill keyboard listeners + if (typeof keyboardListener !== 'undefined') { + jsPsych.pluginAPI.cancelKeyboardResponse(keyboardListener); + } + + // gather the data to store for the trial + var trial_data = { + rt: response.rt, + response: response.key + }; + + // clear the display + display_element.innerHTML = ''; + + // move on to the next trial + jsPsych.finishTrial(trial_data); + }; + + // function to handle responses by the subject + var after_response = function (info) { + + // after a valid response, the stimulus will have the CSS class 'responded' + // which can be used to provide visual feedback that a response was recorded + display_element.querySelector('#jspsych-canvas-keyboard-response-stimulus').className += ' responded'; + + // only record the first response + if (response.key == null) { + response = info; + } + + if (trial.response_ends_trial) { + end_trial(); + } + }; + + // start the response listener + if (trial.choices != jsPsych.NO_KEYS) { + var keyboardListener = jsPsych.pluginAPI.getKeyboardResponse({ + callback_function: after_response, + valid_responses: trial.choices, + rt_method: 'performance', + persist: false, + allow_held_key: false + }); + } + + // hide stimulus if stimulus_duration is set + if (trial.stimulus_duration !== null) { + jsPsych.pluginAPI.setTimeout(function () { + display_element.querySelector('#jspsych-canvas-keyboard-response-stimulus').style.visibility = 'hidden'; + }, trial.stimulus_duration); + } + + // end trial if trial_duration is set + if (trial.trial_duration !== null) { 
+ jsPsych.pluginAPI.setTimeout(function () { + end_trial(); + }, trial.trial_duration); + } + + }; + + return plugin; +})(); diff --git a/plugins/jspsych-canvas-slider-response.js b/plugins/jspsych-canvas-slider-response.js new file mode 100644 index 0000000000..4746b7c85b --- /dev/null +++ b/plugins/jspsych-canvas-slider-response.js @@ -0,0 +1,207 @@ +/** + * jspsych-canvas-slider-response + * Chris Jungerius (modified from Josh de Leeuw) + * + * a jsPsych plugin for displaying a canvas stimulus and getting a slider response + * + * documentation: docs.jspsych.org + * + */ + + +jsPsych.plugins['canvas-slider-response'] = (function () { + + var plugin = {}; + + plugin.info = { + name: 'canvas-slider-response', + description: '', + parameters: { + stimulus: { + type: jsPsych.plugins.parameterType.FUNCTION, + pretty_name: 'Stimulus', + default: undefined, + description: 'The drawing function to apply to the canvas. Should take the canvas object as argument.' + }, + min: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Min slider', + default: 0, + description: 'Sets the minimum value of the slider.' + }, + max: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Max slider', + default: 100, + description: 'Sets the maximum value of the slider', + }, + slider_start: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Slider starting value', + default: 50, + description: 'Sets the starting value of the slider', + }, + step: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Step', + default: 1, + description: 'Sets the step of the slider' + }, + labels: { + type: jsPsych.plugins.parameterType.HTML_STRING, + pretty_name: 'Labels', + default: [], + array: true, + description: 'Labels of the slider.', + }, + slider_width: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Slider width', + default: null, + description: 'Width of the slider in pixels.' 
+ }, + button_label: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Button label', + default: 'Continue', + array: false, + description: 'Label of the button to advance.' + }, + require_movement: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Require movement', + default: false, + description: 'If true, the participant will have to move the slider before continuing.' + }, + prompt: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Prompt', + default: null, + description: 'Any content here will be displayed below the slider.' + }, + stimulus_duration: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Stimulus duration', + default: null, + description: 'How long to hide the stimulus.' + }, + trial_duration: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: 'Trial duration', + default: null, + description: 'How long to show the trial.' + }, + response_ends_trial: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Response ends trial', + default: true, + description: 'If true, trial will end when user makes a response.' + }, + canvas_size: { + type: jsPsych.plugins.parameterType.INT, + array: true, + pretty_name: 'Canvas size', + default: [500, 500], + description: 'Array containing the height (first value) and width (second value) of the canvas element.' + } + + } + } + + plugin.trial = function (display_element, trial) { + + var html = '
'; + html += '
<div id="jspsych-canvas-slider-response-stimulus">' + '<canvas id="jspsych-canvas-stimulus" height="' + trial.canvas_size[0] + '" width="' + trial.canvas_size[1] + '"></canvas>' + '</div>
'; + html += '
'; + html += ''; + html += '
' + for (var j = 0; j < trial.labels.length; j++) { + var width = 100 / (trial.labels.length - 1); + var left_offset = (j * (100 / (trial.labels.length - 1))) - (width / 2); + html += '
'; + html += '' + trial.labels[j] + ''; + html += '
' + } + html += '
'; + html += '
'; + html += '
'; + + if (trial.prompt !== null) { + html += trial.prompt; + } + + // add submit button + html += ''; + + display_element.innerHTML = html; + + // draw + let c = document.getElementById("jspsych-canvas-stimulus") + trial.stimulus(c) + + var response = { + rt: null, + response: null + }; + + if (trial.require_movement) { + display_element.querySelector('#jspsych-canvas-slider-response-response').addEventListener('click', function () { + display_element.querySelector('#jspsych-canvas-slider-response-next').disabled = false; + }) + } + + display_element.querySelector('#jspsych-canvas-slider-response-next').addEventListener('click', function () { + // measure response time + var endTime = performance.now(); + response.rt = endTime - startTime; + response.response = display_element.querySelector('#jspsych-canvas-slider-response-response').valueAsNumber; + + if (trial.response_ends_trial) { + end_trial(); + } else { + display_element.querySelector('#jspsych-canvas-slider-response-next').disabled = true; + } + + }); + + function end_trial() { + + jsPsych.pluginAPI.clearAllTimeouts(); + + // save data + var trialdata = { + rt: response.rt, + response: response.response, + slider_start: trial.slider_start + }; + + display_element.innerHTML = ''; + + // next trial + jsPsych.finishTrial(trialdata); + } + + if (trial.stimulus_duration !== null) { + jsPsych.pluginAPI.setTimeout(function () { + display_element.querySelector('#jspsych-canvas-slider-response-stimulus').style.visibility = 'hidden'; + }, trial.stimulus_duration); + } + + // end trial if trial_duration is set + if (trial.trial_duration !== null) { + jsPsych.pluginAPI.setTimeout(function () { + end_trial(); + }, trial.trial_duration); + } + + var startTime = performance.now(); + }; + + return plugin; +})(); diff --git a/plugins/jspsych-categorize-animation.js b/plugins/jspsych-categorize-animation.js index f866036d6e..b7af7d46ed 100644 --- a/plugins/jspsych-categorize-animation.js +++ 
b/plugins/jspsych-categorize-animation.js @@ -23,13 +23,13 @@ jsPsych.plugins["categorize-animation"] = (function() { description: 'Array of paths to image files.' }, key_answer: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key answer', default: undefined, description: 'The key to indicate correct response' }, choices: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Choices', default: jsPsych.ALL_KEYS, array: true, @@ -83,6 +83,13 @@ jsPsych.plugins["categorize-animation"] = (function() { default: null, description: 'Any content here will be displayed below the stimulus.' }, + render_on_canvas: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Render on canvas', + default: true, + description: 'If true, the images will be drawn onto a canvas element (prevents blank screen between consecutive images in some browsers).'+ + 'If false, the image will be shown via an img element.' 
+ } } } @@ -97,9 +104,36 @@ jsPsych.plugins["categorize-animation"] = (function() { var timeoutSet = false; var correct; + if (trial.render_on_canvas) { + // first clear the display element (because the render_on_canvas method appends to display_element instead of overwriting it with .innerHTML) + if (display_element.hasChildNodes()) { + // can't loop through child list because the list will be modified by .removeChild() + while (display_element.firstChild) { + display_element.removeChild(display_element.firstChild); + } + } + var canvas = document.createElement("canvas"); + canvas.id = "jspsych-categorize-animation-stimulus"; + canvas.style.margin = 0; + canvas.style.padding = 0; + display_element.insertBefore(canvas, null); + var ctx = canvas.getContext("2d"); + if (trial.prompt !== null) { + var prompt_div = document.createElement("div"); + prompt_div.id = "jspsych-categorize-animation-prompt"; + prompt_div.style.visibility = "hidden"; + prompt_div.innerHTML = trial.prompt; + display_element.insertBefore(prompt_div, canvas.nextElementSibling); + } + var feedback_div = document.createElement("div"); + display_element.insertBefore(feedback_div, display_element.nextElementSibling); + } + // show animation var animate_interval = setInterval(function() { - display_element.innerHTML = ''; // clear everything + if (!trial.render_on_canvas) { + display_element.innerHTML = ''; // clear everything + } animate_frame++; if (animate_frame == trial.stimuli.length) { animate_frame = 0; @@ -112,20 +146,45 @@ jsPsych.plugins["categorize-animation"] = (function() { } if (showAnimation) { - display_element.innerHTML += ''; + if (trial.render_on_canvas) { + display_element.querySelector('#jspsych-categorize-animation-stimulus').style.visibility = 'visible'; + var img = new Image(); + img.src = trial.stimuli[animate_frame]; + canvas.height = img.naturalHeight; + canvas.width = img.naturalWidth; + ctx.drawImage(img,0,0); + } else { + display_element.innerHTML += ''; + } } if 
(!responded && trial.allow_response_before_complete) { // in here if the user can respond before the animation is done if (trial.prompt !== null) { - display_element.innerHTML += trial.prompt; + if (trial.render_on_canvas) { + prompt_div.style.visibility = "visible"; + } else { + display_element.innerHTML += trial.prompt; + } + } + if (trial.render_on_canvas) { + if (!showAnimation) { + canvas.remove(); + } } } else if (!responded) { // in here if the user has to wait to respond until animation is done. // if this is the case, don't show the prompt until the animation is over. if (!showAnimation) { if (trial.prompt !== null) { - display_element.innerHTML += trial.prompt; + if (trial.render_on_canvas) { + prompt_div.style.visibility = "visible"; + } else { + display_element.innerHTML += trial.prompt; + } + } + if (trial.render_on_canvas) { + canvas.remove(); } } } else { @@ -138,7 +197,14 @@ jsPsych.plugins["categorize-animation"] = (function() { } else { feedback_text = trial.incorrect_text.replace("%ANS%", trial.text_answer); } - display_element.innerHTML += feedback_text; + if (trial.render_on_canvas) { + if (trial.prompt !== null) { + prompt_div.remove(); + } + feedback_div.innerHTML = feedback_text; + } else { + display_element.innerHTML += feedback_text; + } // set timeout to clear feedback if (!timeoutSet) { @@ -164,17 +230,17 @@ jsPsych.plugins["categorize-animation"] = (function() { } correct = false; - if (trial.key_answer == info.key) { + if (jsPsych.pluginAPI.compareKeys(trial.key_answer, info.key)) { correct = true; } responded = true; trial_data = { - "stimulus": JSON.stringify(trial.stimuli), - "rt": info.rt, - "correct": correct, - "key_press": info.key + stimulus: trial.stimuli, + rt: info.rt, + correct: correct, + response: info.key }; jsPsych.pluginAPI.cancelKeyboardResponse(keyboard_listener); diff --git a/plugins/jspsych-categorize-html.js b/plugins/jspsych-categorize-html.js index a770f343ec..7ba5fdbfa7 100644 --- 
a/plugins/jspsych-categorize-html.js +++ b/plugins/jspsych-categorize-html.js @@ -21,13 +21,13 @@ jsPsych.plugins['categorize-html'] = (function() { description: 'The HTML content to be displayed.' }, key_answer: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key answer', default: undefined, description: 'The key to indicate the correct response.' }, choices: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Choices', default: jsPsych.ALL_KEYS, array: true, @@ -130,16 +130,16 @@ jsPsych.plugins['categorize-html'] = (function() { jsPsych.pluginAPI.cancelAllKeyboardResponses(); var correct = false; - if (trial.key_answer == info.key) { + if (jsPsych.pluginAPI.compareKeys(trial.key_answer,info.key)) { correct = true; } // save data trial_data = { - "rt": info.rt, - "correct": correct, - "stimulus": trial.stimulus, - "key_press": info.key + rt: info.rt, + correct: correct, + stimulus: trial.stimulus, + response: info.key }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-categorize-image.js b/plugins/jspsych-categorize-image.js index 8b2eca1714..6ddab6f0ab 100644 --- a/plugins/jspsych-categorize-image.js +++ b/plugins/jspsych-categorize-image.js @@ -23,13 +23,13 @@ jsPsych.plugins['categorize-image'] = (function() { description: 'The image content to be displayed.' }, key_answer: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key answer', default: undefined, description: 'The key to indicate the correct response.' 
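A recurring change across these plugin diffs is replacing raw comparisons like `trial.key_answer == info.key` with `jsPsych.pluginAPI.compareKeys()`, since responses are now recorded as key strings rather than numeric key codes. The sketch below is a hypothetical stand-in for the idea (illustrative only, not the actual jsPsych implementation):

```javascript
// Illustrative sketch of key comparison in the spirit of
// jsPsych.pluginAPI.compareKeys: string keys compare
// case-insensitively by default. The function name and the
// caseSensitive flag are assumptions for this sketch.
function compareKeysSketch(key1, key2, caseSensitive = false) {
  if (typeof key1 === 'string' && typeof key2 === 'string') {
    return caseSensitive
      ? key1 === key2
      : key1.toLowerCase() === key2.toLowerCase();
  }
  // non-string values (e.g. null for "no response") fall back to strict equality
  return key1 === key2;
}
```

This is why defaults such as `'E'` can safely become `'e'` in the IAT plugins below: a case-insensitive comparison accepts either form of the pressed key.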
}, choices: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Choices', default: jsPsych.ALL_KEYS, array: true, @@ -132,16 +132,16 @@ jsPsych.plugins['categorize-image'] = (function() { jsPsych.pluginAPI.cancelAllKeyboardResponses(); var correct = false; - if (trial.key_answer == info.key) { + if (jsPsych.pluginAPI.compareKeys(trial.key_answer, info.key)) { correct = true; } // save data trial_data = { - "rt": info.rt, - "correct": correct, - "stimulus": trial.stimulus, - "key_press": info.key + rt: info.rt, + correct: correct, + stimulus: trial.stimulus, + response: info.key }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-cloze.js b/plugins/jspsych-cloze.js index c954f4898d..2b97827c51 100644 --- a/plugins/jspsych-cloze.js +++ b/plugins/jspsych-cloze.js @@ -48,7 +48,7 @@ jsPsych.plugins['cloze'] = (function () { var elements = trial.text.split('%'); var solutions = []; - for (i=0; i'; + 'style="position: relative; width:'+trial.sort_area_width+'px; height:'+trial.sort_area_height+'px; margin: auto;"'; + + // another div for border + html += '
'+get_counter_text(trial.stimuli.length)+'

'; + + // position prompt above or below + if (trial.prompt_location == "below") { + html += html_text + } else { + html = html_text + html } + // add button + html += '
'; display_element.innerHTML = html; // store initial location data - var init_locations = []; + let init_locations = []; - for (var i = 0; i < trial.stimuli.length; i++) { - var coords = random_coordinate(trial.sort_area_width - trial.stim_width, trial.sort_area_height - trial.stim_height); + if (!trial.stim_starts_inside) { + // determine number of rows and columns, must be an even number + let num_rows = Math.ceil(Math.sqrt(trial.stimuli.length)) + if ( num_rows % 2 != 0) { + num_rows = num_rows + 1 + } + + // compute coords for left and right side of arena + var r_coords = []; + var l_coords = []; + for (const x of make_arr(0, trial.sort_area_width - trial.stim_width, num_rows) ) { + for (const y of make_arr(0, trial.sort_area_height - trial.stim_height, num_rows) ) { + if ( x > ( (trial.sort_area_width - trial.stim_width) * .5 ) ) { + //r_coords.push({ x:x, y:y } ) + r_coords.push({ x:x + (trial.sort_area_width) * (.5*trial.column_spread_factor) , y:y }); + } else { + l_coords.push({ x:x - (trial.sort_area_width) * (.5*trial.column_spread_factor) , y:y }); + //l_coords.push({ x:x, y:y } ) + } + } + } + + // repeat coordinates until you have enough coords (may be obsolete) + while ( ( r_coords.length + l_coords.length ) < trial.stimuli.length ) { + r_coords = r_coords.concat(r_coords) + l_coords = l_coords.concat(l_coords) + } + // reverse left coords, so that coords closest to the arena are used first + l_coords = l_coords.reverse() + + // shuffle stimuli, so that starting positions are random + trial.stimuli = shuffle(trial.stimuli); + } + let inside = [] + for (let i = 0; i < trial.stimuli.length; i++) { + var coords; + if (trial.stim_starts_inside) { + coords = random_coordinate(trial.sort_area_width - trial.stim_width, trial.sort_area_height - trial.stim_height); + } else { + if ( (i % 2) == 0 ) { + coords = r_coords[Math.floor(i * .5)]; + } else { + coords = l_coords[Math.floor(i * .5)]; + } + } + 
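The coordinate layout above relies on two helpers the free-sort diff defines further down: `shuffle` (a Fisher-Yates shuffle, used to randomize stimulus starting positions) and `make_arr` (evenly spaced values between two endpoints, used to build the grid of candidate coordinates). A self-contained sketch of both, matching the diff:

```javascript
// Fisher-Yates shuffle: walks the array from the end, swapping each
// element with a randomly chosen earlier (or same) element. In place.
function shuffle(array) {
  let cur_idx = array.length;
  while (cur_idx !== 0) {
    const rand_idx = Math.floor(Math.random() * cur_idx);
    cur_idx -= 1;
    [array[cur_idx], array[rand_idx]] = [array[rand_idx], array[cur_idx]];
  }
  return array;
}

// Evenly spaced values from startValue to stopValue inclusive
// (cardinality points), analogous to a linspace.
function make_arr(startValue, stopValue, cardinality) {
  const step = (stopValue - startValue) / (cardinality - 1);
  const arr = [];
  for (let i = 0; i < cardinality; i++) {
    arr.push(startValue + step * i);
  }
  return arr;
}
```

For example, `make_arr(0, 10, 5)` yields `[0, 2.5, 5, 7.5, 10]`, which the plugin uses for both the x and y grid positions outside the arena.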
display_element.querySelector("#jspsych-free-sort-arena").innerHTML += ''+ ''; init_locations.push({ - "src": trial.stimuli[i], - "x": coords.x, - "y": coords.y + src: trial.stimuli[i], + x: coords.x, + y: coords.y }); + if (trial.stim_starts_inside) { + inside.push(true); + } else { + inside.push(false); + } + } + + // moves within a trial + let moves = []; + + // are objects currently inside + let cur_in = false + + // draggable items + const draggables = display_element.querySelectorAll('.jspsych-free-sort-draggable'); + + // button (will show when all items are inside) and border (will change color) + const border = display_element.querySelector("#jspsych-free-sort-border") + const button = display_element.querySelector('#jspsych-free-sort-done-btn') + + // when trial starts, modify text and border/background if all items are inside (stim_starts_inside: true) + if (inside.some(Boolean) && trial.change_border_background_color) { + border.style.borderColor = trial.border_color_in; } + if (inside.every(Boolean)) { + if (trial.change_border_background_color) { + border.style.background = trial.border_color_in; + } + button.style.visibility = "visible"; + display_element.querySelector("#jspsych-free-sort-counter").innerHTML = trial.counter_text_finished; + } - display_element.innerHTML += ''; + let start_event_name = 'mousedown'; + let move_event_name = 'mousemove'; + let end_event_name = 'mouseup'; + if (typeof document.ontouchend !== 'undefined'){ // for touch devices + start_event_name = 'touchstart' + move_event_name = 'touchmove' + end_event_name = 'touchend' + } - var maxz = 1; + for(let i=0; i 1) { + text_out += "s"; + } + } + } + return text_out; + } }; // helper functions - function random_coordinate(max_width, max_height) { - var rnd_x = Math.floor(Math.random() * (max_width - 1)); - var rnd_y = Math.floor(Math.random() * (max_height - 1)); + function shuffle(array) { + // define three variables + let cur_idx = array.length, tmp_val, rand_idx; + + // While 
there remain elements to shuffle... + while (0 !== cur_idx) { + // Pick a remaining element... + rand_idx = Math.floor(Math.random() * cur_idx); + cur_idx -= 1; + // And swap it with the current element. + tmp_val = array[cur_idx]; + array[cur_idx] = array[rand_idx]; + array[rand_idx] = tmp_val; + } + return array; + } + + function make_arr(startValue, stopValue, cardinality) { + const step = (stopValue - startValue) / (cardinality - 1); + let arr = []; + for (let i = 0; i < cardinality; i++) { + arr.push(startValue + (step * i)); + } + return arr; + } + + function inside_ellipse(x, y, x0, y0, rx, ry, square=false) { + let result; + if (square) { + result = ( Math.abs(x - x0) <= rx ) && ( Math.abs(y - y0) <= ry ) + } else { + result = (( x - x0 ) * ( x - x0 )) * (ry * ry) + ((y - y0) * ( y - y0 )) * ( rx * rx ) <= ( (rx * rx) * (ry * ry) ) + } + return result + } + + function random_coordinate(max_width, max_height) { + const rnd_x = Math.floor(Math.random() * (max_width - 1)); + const rnd_y = Math.floor(Math.random() * (max_height - 1)); return { x: rnd_x, y: rnd_y diff --git a/plugins/jspsych-fullscreen.js b/plugins/jspsych-fullscreen.js index e6a549c974..f31248e0ba 100644 --- a/plugins/jspsych-fullscreen.js +++ b/plugins/jspsych-fullscreen.js @@ -69,14 +69,16 @@ jsPsych.plugins.fullscreen = (function() { endTrial(); }); } else { - if (document.exitFullscreen) { - document.exitFullscreen(); - } else if (document.msExitFullscreen) { - document.msExitFullscreen(); - } else if (document.mozCancelFullScreen) { - document.mozCancelFullScreen(); - } else if (document.webkitExitFullscreen) { - document.webkitExitFullscreen(); + if ( document.fullscreenElement || document.mozFullScreenElement || document.webkitFullscreenElement ) { + if (document.exitFullscreen) { + document.exitFullscreen(); + } else if (document.msExitFullscreen) { + document.msExitFullscreen(); + } else if (document.mozCancelFullScreen) { + document.mozCancelFullScreen(); + } else if
(document.webkitExitFullscreen) { + document.webkitExitFullscreen(); + } } endTrial(); } diff --git a/plugins/jspsych-html-button-response.js b/plugins/jspsych-html-button-response.js index 727cfdf3f4..28f6ecb6dd 100644 --- a/plugins/jspsych-html-button-response.js +++ b/plugins/jspsych-html-button-response.js @@ -2,7 +2,7 @@ * jspsych-html-button-response * Josh de Leeuw * - * plugin for displaying a stimulus and getting a keyboard response + * plugin for displaying a stimulus and getting a button response * * documentation: docs.jspsych.org * @@ -129,7 +129,7 @@ jsPsych.plugins["html-button-response"] = (function() { // measure rt var end_time = performance.now(); var rt = end_time - start_time; - response.button = choice; + response.button = parseInt(choice); response.rt = rt; // after a valid response, the stimulus will have the CSS class 'responded' @@ -156,9 +156,9 @@ jsPsych.plugins["html-button-response"] = (function() { // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "button_pressed": response.button + rt: response.rt, + stimulus: trial.stimulus, + response: response.button }; // clear the display diff --git a/plugins/jspsych-html-keyboard-response.js b/plugins/jspsych-html-keyboard-response.js index f225de6feb..017c13ad86 100644 --- a/plugins/jspsych-html-keyboard-response.js +++ b/plugins/jspsych-html-keyboard-response.js @@ -24,7 +24,7 @@ jsPsych.plugins["html-keyboard-response"] = (function() { description: 'The HTML string to be displayed' }, choices: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, array: true, pretty_name: 'Choices', default: jsPsych.ALL_KEYS, @@ -89,9 +89,9 @@ jsPsych.plugins["html-keyboard-response"] = (function() { // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "key_press": response.key + rt: response.rt, + stimulus: trial.stimulus, + response: response.key 
}; // clear the display diff --git a/plugins/jspsych-html-slider-response.js b/plugins/jspsych-html-slider-response.js index 94a9e20ce0..f76008471d 100644 --- a/plugins/jspsych-html-slider-response.js +++ b/plugins/jspsych-html-slider-response.js @@ -35,7 +35,7 @@ jsPsych.plugins['html-slider-response'] = (function() { default: 100, description: 'Sets the maximum value of the slider', }, - start: { + slider_start: { type: jsPsych.plugins.parameterType.INT, pretty_name: 'Slider starting value', default: 50, @@ -102,19 +102,27 @@ jsPsych.plugins['html-slider-response'] = (function() { plugin.trial = function(display_element, trial) { + // half of the thumb width value from jspsych.css, used to adjust the label positions + var half_thumb_width = 7.5; + var html = '
'; html += '
' + trial.stimulus + '
'; html += '
'; - html += ''; + html += ''; html += '
' for(var j=0; j < trial.labels.length; j++){ - var width = 100/(trial.labels.length-1); - var left_offset = (j * (100 /(trial.labels.length - 1))) - (width/2); - html += '
'; + var label_width_perc = 100/(trial.labels.length-1); + var percent_of_range = j * (100/(trial.labels.length - 1)); + var percent_dist_from_center = ((percent_of_range-50)/50)*100; + var offset = (percent_dist_from_center * half_thumb_width)/100; + html += '
'; html += ''+trial.labels[j]+''; html += '
' } @@ -135,18 +143,18 @@ jsPsych.plugins['html-slider-response'] = (function() { rt: null, response: null }; - + if(trial.require_movement){ - display_element.querySelector('#jspsych-html-slider-response-response').addEventListener('change', function(){ + display_element.querySelector('#jspsych-html-slider-response-response').addEventListener('click', function(){ display_element.querySelector('#jspsych-html-slider-response-next').disabled = false; - }) + }); } display_element.querySelector('#jspsych-html-slider-response-next').addEventListener('click', function() { // measure response time var endTime = performance.now(); response.rt = endTime - startTime; - response.response = display_element.querySelector('#jspsych-html-slider-response-response').value; + response.response = display_element.querySelector('#jspsych-html-slider-response-response').valueAsNumber; if(trial.response_ends_trial){ end_trial(); @@ -162,9 +170,10 @@ jsPsych.plugins['html-slider-response'] = (function() { // save data var trialdata = { - "rt": response.rt, - "response": response.response, - "stimulus": trial.stimulus + rt: response.rt, + stimulus: trial.stimulus, + slider_start: trial.slider_start, + response: response.response }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-iat-html.js b/plugins/jspsych-iat-html.js index 14faddc033..d9efe00a73 100644 --- a/plugins/jspsych-iat-html.js +++ b/plugins/jspsych-iat-html.js @@ -2,7 +2,7 @@ * jspsych-iat * Kristin Diep * - * plugin for displaying a stimulus and getting a keyboard response + * plugin for running an IAT (Implicit Association Test) with an HTML-formatted stimulus * * documentation: docs.jspsych.org * @@ -24,15 +24,15 @@ description: 'The HTML string to be displayed.' 
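The new slider label placement above compensates for the slider thumb's width: each label's position as a percentage of the track is converted to a signed distance from center, then scaled by `half_thumb_width` (7.5px, per jspsych.css). Isolated as a small function (the function name is ours; the arithmetic mirrors the diff):

```javascript
// Pixel offset applied to the j-th of n_labels slider labels, so that
// the outermost labels line up with the thumb's extreme positions
// rather than the raw track endpoints.
function labelOffset(j, n_labels, half_thumb_width = 7.5) {
  const percent_of_range = j * (100 / (n_labels - 1));          // 0..100 along the track
  const percent_dist_from_center = ((percent_of_range - 50) / 50) * 100; // -100..100
  return (percent_dist_from_center * half_thumb_width) / 100;   // pixels
}
```

With three labels, the left label shifts by -7.5px, the middle label by 0, and the right label by +7.5px, matching the half-thumb width on each side.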
}, left_category_key: { - type: jsPsych.plugins.parameterType.HTML_STRING, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Left category key', - default: 'E', + default: 'e', description: 'Key press that is associated with the left category label.' }, right_category_key: { - type: jsPsych.plugins.parameterType.STRING, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Right category key', - default: 'I', + default: 'i', description: 'Key press that is associated with the right category label.' }, left_category_label: { @@ -50,7 +50,7 @@ description: 'The label that is associated with the stimulus. Aligned to the right side of the page.' }, key_to_move_forward: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key to move forward', array: true, default: jsPsych.ALL_KEYS, @@ -66,7 +66,7 @@ type: jsPsych.plugins.parameterType.HTML_STRING, pretty_name: 'HTML when wrong', default: 'X', - description: 'The image to display when a user presses the wrong key.' + description: 'The HTML to display when a user presses the wrong key.' }, bottom_instructions: { type: jsPsych.plugins.parameterType.HTML_STRING, @@ -84,8 +84,8 @@ type: jsPsych.plugins.parameterType.HTML_STRING, pretty_name: 'Stimulus key association', options: ['left', 'right'], - default: 'undefined', - description: 'Stimulus will be associated with eight "left" or "right".' + default: undefined, + description: 'Stimulus will be associated with either "left" or "right".' 
}, response_ends_trial: { type: jsPsych.plugins.parameterType.BOOL, @@ -165,10 +165,10 @@ // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "key_press": response.key, - "correct": response.correct + rt: response.rt, + stimulus: trial.stimulus, + response: response.key, + correct: response.correct }; // clears the display @@ -178,8 +178,8 @@ jsPsych.finishTrial(trial_data); }; - var leftKeyCode = jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.left_category_key); - var rightKeyCode = jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.right_category_key); + var leftKeyCode = trial.left_category_key; + var rightKeyCode = trial.right_category_key; // function to handle responses by the subject var after_response = function(info) { @@ -194,7 +194,7 @@ } if(trial.stim_key_association == "right") { - if(response.rt !== null && response.key == rightKeyCode) { + if(response.rt !== null && jsPsych.pluginAPI.compareKeys(response.key, rightKeyCode)) { response.correct = true; if (trial.response_ends_trial) { end_trial(); @@ -226,7 +226,7 @@ } } } else if(trial.stim_key_association == "left") { - if(response.rt !== null && response.key == leftKeyCode) { + if(response.rt !== null && jsPsych.pluginAPI.compareKeys(response.key, leftKeyCode)) { response.correct = true; if (trial.response_ends_trial) { end_trial(); diff --git a/plugins/jspsych-iat-image.js b/plugins/jspsych-iat-image.js index 48b91611d7..0f37bac6f4 100644 --- a/plugins/jspsych-iat-image.js +++ b/plugins/jspsych-iat-image.js @@ -2,7 +2,7 @@ * jspsych-iat * Kristin Diep * - * plugin for displaying a stimulus and getting a keyboard response + * plugin for running an IAT (Implicit Association Test) with an image stimulus * * documentation: docs.jspsych.org * @@ -26,15 +26,15 @@ description: 'The image to be displayed' }, left_category_key: { - type: jsPsych.plugins.parameterType.HTML_STRING, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 
'Left category key', - default: 'E', + default: 'e', description: 'Key press that is associated with the left category label.' }, right_category_key: { - type: jsPsych.plugins.parameterType.STRING, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Right category key', - default: 'I', + default: 'i', description: 'Key press that is associated with the right category label.' }, left_category_label: { @@ -52,7 +52,7 @@ description: 'The label that is associated with the stimulus. Aligned to the right side of the page.' }, key_to_move_forward: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key to move forward', array: true, default: jsPsych.ALL_KEYS, @@ -68,7 +68,7 @@ type: jsPsych.plugins.parameterType.HTML_STRING, pretty_name: 'HTML when wrong', default: 'X', - description: 'The image to display when a user presses the wrong key.' + description: 'The HTML to display when a user presses the wrong key.' }, bottom_instructions: { type: jsPsych.plugins.parameterType.HTML_STRING, @@ -86,8 +86,8 @@ type: jsPsych.plugins.parameterType.HTML_STRING, pretty_name: 'Stimulus key association', options: ['left', 'right'], - default: 'undefined', - description: 'Stimulus will be associated with eight "left" or "right".' + default: undefined, + description: 'Stimulus will be associated with either "left" or "right".' 
}, response_ends_trial: { type: jsPsych.plugins.parameterType.BOOL, @@ -167,10 +167,10 @@ // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "key_press": response.key, - "correct": response.correct + rt: response.rt, + stimulus: trial.stimulus, + response: response.key, + correct: response.correct }; // clears the display @@ -180,8 +180,8 @@ jsPsych.finishTrial(trial_data); }; - var leftKeyCode = jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.left_category_key); - var rightKeyCode = jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.right_category_key); + var leftKeyCode = trial.left_category_key; + var rightKeyCode = trial.right_category_key; // function to handle responses by the subject var after_response = function(info) { @@ -196,7 +196,7 @@ } if(trial.stim_key_association == "right") { - if(response.rt !== null && response.key == rightKeyCode) { + if(response.rt !== null && jsPsych.pluginAPI.compareKeys(response.key, rightKeyCode)) { response.correct = true; if (trial.response_ends_trial) { end_trial(); @@ -228,7 +228,7 @@ } } } else if(trial.stim_key_association == "left") { - if(response.rt !== null && response.key == leftKeyCode) { + if(response.rt !== null && jsPsych.pluginAPI.compareKeys(response.key, leftKeyCode)) { response.correct = true; if (trial.response_ends_trial) { end_trial(); diff --git a/plugins/jspsych-image-button-response.js b/plugins/jspsych-image-button-response.js index b16e5110b1..b19341af54 100644 --- a/plugins/jspsych-image-button-response.js +++ b/plugins/jspsych-image-button-response.js @@ -2,7 +2,7 @@ * jspsych-image-button-response * Josh de Leeuw * - * plugin for displaying a stimulus and getting a keyboard response + * plugin for displaying a stimulus and getting a button response * * documentation: docs.jspsych.org * @@ -92,54 +92,158 @@ jsPsych.plugins["image-button-response"] = (function() { default: true, description: 'If true, then trial will end when 
user responds.' }, + render_on_canvas: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Render on canvas', + default: true, + description: 'If true, the image will be drawn onto a canvas element (prevents blank screen between consecutive images in some browsers).'+ + 'If false, the image will be shown via an img element.' + } } } plugin.trial = function(display_element, trial) { - // display stimulus - var html = ''+str+'
'; + } + btngroup_div.innerHTML = html; + // add canvas to screen and draw image + display_element.insertBefore(canvas, null); + if (img.complete && Number.isFinite(width) && Number.isFinite(height)) { + // if image has loaded and width/height have been set, then draw it now + // (don't rely on img onload function to draw image when image is in the cache, because that causes a delay in the image presentation) + ctx.drawImage(img,0,0,width,height); + image_drawn = true; + } + // add buttons to screen + display_element.insertBefore(btngroup_div, canvas.nextElementSibling); + // add prompt if there is one + if (trial.prompt !== null) { + display_element.insertAdjacentHTML('beforeend', trial.prompt); } - } - html += '
'; - for (var i = 0; i < trial.choices.length; i++) { - var str = buttons[i].replace(/%choice%/g, trial.choices[i]); - html += '
'+str+'
'; - } - html += '
'; + } else { - //show prompt if there is one - if (trial.prompt !== null) { - html += trial.prompt; - } + // display stimulus as an image element + html = ''; + //display buttons + var buttons = []; + if (Array.isArray(trial.button_html)) { + if (trial.button_html.length == trial.choices.length) { + buttons = trial.button_html; + } else { + console.error('Error in image-button-response plugin. The length of the button_html array does not equal the length of the choices array'); + } + } else { + for (var i = 0; i < trial.choices.length; i++) { + buttons.push(trial.button_html); + } + } + html += '
'; + + for (var i = 0; i < trial.choices.length; i++) { + var str = buttons[i].replace(/%choice%/g, trial.choices[i]); + html += '
'+str+'
'; + } + html += '
'; + // add prompt + if (trial.prompt !== null){ + html += trial.prompt; + } + // update the page content + display_element.innerHTML = html; - display_element.innerHTML = html; + // set image dimensions after image has loaded (so that we have access to naturalHeight/naturalWidth) + var img = display_element.querySelector('#jspsych-image-button-response-stimulus'); + if (trial.stimulus_height !== null) { + height = trial.stimulus_height; + if (trial.stimulus_width == null && trial.maintain_aspect_ratio) { + width = img.naturalWidth * (trial.stimulus_height/img.naturalHeight); + } + } else { + height = img.naturalHeight; + } + if (trial.stimulus_width !== null) { + width = trial.stimulus_width; + if (trial.stimulus_height == null && trial.maintain_aspect_ratio) { + height = img.naturalHeight * (trial.stimulus_width/img.naturalWidth); + } + } else if (!(trial.stimulus_height !== null & trial.maintain_aspect_ratio)) { + // if stimulus width is null, only use the image's natural width if the width value wasn't set + // in the if statement above, based on a specified height and maintain_aspect_ratio = true + width = img.naturalWidth; + } + img.style.height = height.toString() + "px"; + img.style.width = width.toString() + "px"; + } // start timing var start_time = performance.now(); @@ -163,7 +267,7 @@ jsPsych.plugins["image-button-response"] = (function() { // measure rt var end_time = performance.now(); var rt = end_time - start_time; - response.button = choice; + response.button = parseInt(choice); response.rt = rt; // after a valid response, the stimulus will have the CSS class 'responded' @@ -190,9 +294,9 @@ jsPsych.plugins["image-button-response"] = (function() { // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "button_pressed": response.button + rt: response.rt, + stimulus: trial.stimulus, + response: response.button }; // clear the display @@ -202,8 +306,6 @@ 
jsPsych.plugins["image-button-response"] = (function() { jsPsych.finishTrial(trial_data); }; - - // hide image if timing is set if (trial.stimulus_duration !== null) { jsPsych.pluginAPI.setTimeout(function() { @@ -216,8 +318,9 @@ jsPsych.plugins["image-button-response"] = (function() { jsPsych.pluginAPI.setTimeout(function() { end_trial(); }, trial.trial_duration); + } else if (trial.response_ends_trial === false) { + console.warn("The experiment may be deadlocked. Try setting a trial duration or set response_ends_trial to true."); } - }; return plugin; diff --git a/plugins/jspsych-image-keyboard-response.js b/plugins/jspsych-image-keyboard-response.js index d3ce11e243..8a9dd99560 100644 --- a/plugins/jspsych-image-keyboard-response.js +++ b/plugins/jspsych-image-keyboard-response.js @@ -44,7 +44,7 @@ jsPsych.plugins["image-keyboard-response"] = (function() { description: 'Maintain the aspect ratio after setting width or height' }, choices: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, array: true, pretty_name: 'Choices', default: jsPsych.ALL_KEYS, @@ -74,34 +74,114 @@ jsPsych.plugins["image-keyboard-response"] = (function() { default: true, description: 'If true, trial will end when subject makes a response.' }, + render_on_canvas: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Render on canvas', + default: true, + description: 'If true, the image will be drawn onto a canvas element (prevents blank screen between consecutive images in some browsers).'+ + 'If false, the image will be shown via an img element.' 
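The image plugins in this diff share the same dimension logic: use `stimulus_height`/`stimulus_width` when given, derive the missing dimension from the image's natural size when `maintain_aspect_ratio` is true, and otherwise fall back to the natural dimensions. Extracted as a pure function for clarity (the helper name is ours; the branching mirrors the diff, with `&&` in place of the diff's bitwise `&`):

```javascript
// Compute display width/height for an image stimulus. Pass null for
// stimulus_width/stimulus_height when the trial does not set them.
function scaledDims(naturalWidth, naturalHeight,
                    stimulus_width, stimulus_height,
                    maintain_aspect_ratio) {
  let width, height;
  if (stimulus_height !== null) {
    height = stimulus_height;
    if (stimulus_width == null && maintain_aspect_ratio) {
      width = naturalWidth * (stimulus_height / naturalHeight);
    }
  } else {
    height = naturalHeight;
  }
  if (stimulus_width !== null) {
    width = stimulus_width;
    if (stimulus_height == null && maintain_aspect_ratio) {
      height = naturalHeight * (stimulus_width / naturalWidth);
    }
  } else if (!(stimulus_height !== null && maintain_aspect_ratio)) {
    // only fall back to natural width if it wasn't already derived
    // from a specified height with maintain_aspect_ratio = true
    width = naturalWidth;
  }
  return { width, height };
}
```

For a 400x200 image with only `stimulus_height: 100` and `maintain_aspect_ratio: true`, this yields 200x100; with only `stimulus_width: 100`, it yields 100x50.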
+ } } } plugin.trial = function(display_element, trial) { - // display stimulus - var html = ''; + // add prompt + if (trial.prompt !== null){ + html += trial.prompt; + } + // update the page content + display_element.innerHTML = html; + + // set image dimensions after image has loaded (so that we have access to naturalHeight/naturalWidth) + var img = display_element.querySelector('#jspsych-image-keyboard-response-stimulus'); + if (trial.stimulus_height !== null) { + height = trial.stimulus_height; + if (trial.stimulus_width == null && trial.maintain_aspect_ratio) { + width = img.naturalWidth * (trial.stimulus_height/img.naturalHeight); + } + } else { + height = img.naturalHeight; + } + if (trial.stimulus_width !== null) { + width = trial.stimulus_width; + if (trial.stimulus_height == null && trial.maintain_aspect_ratio) { + height = img.naturalHeight * (trial.stimulus_width/img.naturalWidth); + } + } else if (!(trial.stimulus_height !== null & trial.maintain_aspect_ratio)) { + // if stimulus width is null, only use the image's natural width if the width value wasn't set + // in the if statement above, based on a specified height and maintain_aspect_ratio = true + width = img.naturalWidth; + } + img.style.height = height.toString() + "px"; + img.style.width = width.toString() + "px"; + } // store response var response = { @@ -122,9 +202,9 @@ jsPsych.plugins["image-keyboard-response"] = (function() { // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "key_press": response.key + rt: response.rt, + stimulus: trial.stimulus, + response: response.key }; // clear the display @@ -174,8 +254,9 @@ jsPsych.plugins["image-keyboard-response"] = (function() { jsPsych.pluginAPI.setTimeout(function() { end_trial(); }, trial.trial_duration); + } else if (trial.response_ends_trial === false) { + console.warn("The experiment may be deadlocked. 
Try setting a trial duration or set response_ends_trial to true."); } - }; return plugin; diff --git a/plugins/jspsych-image-slider-response.js b/plugins/jspsych-image-slider-response.js index c1fc680843..47ad31f258 100644 --- a/plugins/jspsych-image-slider-response.js +++ b/plugins/jspsych-image-slider-response.js @@ -55,7 +55,7 @@ jsPsych.plugins['image-slider-response'] = (function() { default: 100, description: 'Sets the maximum value of the slider', }, - start: { + slider_start: { type: jsPsych.plugins.parameterType.INT, pretty_name: 'Slider starting value', default: 50, @@ -117,54 +117,194 @@ jsPsych.plugins['image-slider-response'] = (function() { default: true, description: 'If true, trial will end when user makes a response.' }, + render_on_canvas: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Render on canvas', + default: true, + description: 'If true, the image will be drawn onto a canvas element (prevents blank screen between consecutive images in some browsers).'+ + 'If false, the image will be shown via an img element.' + } } } plugin.trial = function(display_element, trial) { - var html = '
'; - html += '
'; - html += ''; - html += '
' - for(var j=0; j < trial.labels.length; j++){ - var width = 100/(trial.labels.length-1); - var left_offset = (j * (100 /(trial.labels.length - 1))) - (width/2); - html += '
'; - html += ''+trial.labels[j]+''; - html += '
' - } - html += '
'; - html += '
'; - html += '
'; + getHeightWidth(); // call now, in case image loads immediately (is cached) + // create container with slider and labels + var slider_container = document.createElement('div'); + slider_container.classList.add("jspsych-image-slider-response-container"); + slider_container.style.position = "relative"; + slider_container.style.margin = "0 auto 3em auto"; + if(trial.slider_width !== null){ + slider_container.style.width = trial.slider_width.toString()+'px'; + } + // create html string with slider and labels, and add to slider container + html =''; + html += '
' + for(var j=0; j < trial.labels.length; j++){ + var label_width_perc = 100/(trial.labels.length-1); + var percent_of_range = j * (100/(trial.labels.length - 1)); + var percent_dist_from_center = ((percent_of_range-50)/50)*100; + var offset = (percent_dist_from_center * half_thumb_width)/100; + html += '
'; + html += ''+trial.labels[j]+''; + html += '
' + } + html += '
'; + slider_container.innerHTML = html; + // add canvas and slider to content wrapper div + content_wrapper.insertBefore(canvas, content_wrapper.firstElementChild); + content_wrapper.insertBefore(slider_container, canvas.nextElementSibling); + // add content wrapper div to screen and draw image on canvas + display_element.insertBefore(content_wrapper, null); + if (img.complete && Number.isFinite(width) && Number.isFinite(height)) { + // if image has loaded and width/height have been set, then draw it now + // (don't rely on img onload function to draw image when image is in the cache, because that causes a delay in the image presentation) + ctx.drawImage(img,0,0,width,height); + image_drawn = true; + } + // add prompt if there is one + if (trial.prompt !== null) { + display_element.insertAdjacentHTML('beforeend', trial.prompt); + } + // add submit button + var submit_btn = document.createElement('button'); + submit_btn.id = "jspsych-image-slider-response-next"; + submit_btn.classList.add("jspsych-btn"); + submit_btn.disabled = (trial.require_movement) ? true : false; + submit_btn.innerHTML = trial.button_label; + display_element.insertBefore(submit_btn, display_element.nextElementSibling); - if (trial.prompt !== null){ - html += trial.prompt; - } + } else { - // add submit button - html += ''; + html = '
'; + html += '
'; + html += ''; + html += '
'; + html += '
'; + html += ''; + html += '
' + for(var j=0; j < trial.labels.length; j++){ + var label_width_perc = 100/(trial.labels.length-1); + var percent_of_range = j * (100/(trial.labels.length - 1)); + var percent_dist_from_center = ((percent_of_range-50)/50)*100; + var offset = (percent_dist_from_center * half_thumb_width)/100; + html += '
'; + html += ''+trial.labels[j]+''; + html += '
' + } + html += '
'; + html += '
'; + html += '
'; + + if (trial.prompt !== null){ + html += trial.prompt; + } - display_element.innerHTML = html; + // add submit button + html += ''; + + display_element.innerHTML = html; + + // set image dimensions after image has loaded (so that we have access to naturalHeight/naturalWidth) + var img = display_element.querySelector('img'); + if (trial.stimulus_height !== null) { + height = trial.stimulus_height; + if (trial.stimulus_width == null && trial.maintain_aspect_ratio) { + width = img.naturalWidth * (trial.stimulus_height/img.naturalHeight); + } + } else { + height = img.naturalHeight; + } + if (trial.stimulus_width !== null) { + width = trial.stimulus_width; + if (trial.stimulus_height == null && trial.maintain_aspect_ratio) { + height = img.naturalHeight * (trial.stimulus_width/img.naturalWidth); + } + } else if (!(trial.stimulus_height !== null & trial.maintain_aspect_ratio)) { + // if stimulus width is null, only use the image's natural width if the width value wasn't set + // in the if statement above, based on a specified height and maintain_aspect_ratio = true + width = img.naturalWidth; + } + img.style.height = height.toString() + "px"; + img.style.width = width.toString() + "px"; + } var response = { rt: null, @@ -172,16 +312,16 @@ jsPsych.plugins['image-slider-response'] = (function() { }; if(trial.require_movement){ - display_element.querySelector('#jspsych-image-slider-response-response').addEventListener('change', function(){ + display_element.querySelector('#jspsych-image-slider-response-response').addEventListener('click', function(){ display_element.querySelector('#jspsych-image-slider-response-next').disabled = false; - }) + }); } display_element.querySelector('#jspsych-image-slider-response-next').addEventListener('click', function() { // measure response time var endTime = performance.now(); response.rt = endTime - startTime; - response.response = display_element.querySelector('#jspsych-image-slider-response-response').value; + response.response = 
display_element.querySelector('#jspsych-image-slider-response-response').valueAsNumber; if(trial.response_ends_trial){ end_trial(); @@ -197,8 +337,10 @@ jsPsych.plugins['image-slider-response'] = (function() { // save data var trialdata = { - "rt": response.rt, - "response": response.response + rt: response.rt, + stimulus: trial.stimulus, + slider_start: trial.slider_start, + response: response.response }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-instructions.js b/plugins/jspsych-instructions.js index 3b5ea4c048..70ff290014 100644 --- a/plugins/jspsych-instructions.js +++ b/plugins/jspsych-instructions.js @@ -28,15 +28,15 @@ jsPsych.plugins.instructions = (function() { description: 'Each element of the array is the content for a single page.' }, key_forward: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key forward', - default: 'rightarrow', + default: 'ArrowRight', description: 'The key the subject can press in order to advance to the next page.' }, key_backward: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key backward', - default: 'leftarrow', + default: 'ArrowLeft', description: 'The key that the subject can press to return to the previous page.' }, allow_backward: { @@ -63,6 +63,12 @@ jsPsych.plugins.instructions = (function() { default: false, description: 'If true, and clickable navigation is enabled, then Page x/y will be shown between the nav buttons.' 
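The switch above from KEYCODE parameters ('rightarrow'/'leftarrow') to KEY parameters ('ArrowRight'/'ArrowLeft') means trials now specify keys as standard `KeyboardEvent.key` strings. A minimal sketch of an instructions trial under the new scheme (the page text and option values are hypothetical):

```javascript
// Hypothetical instructions trial using the new string-based key parameters.
// 'ArrowRight' and 'ArrowLeft' are the new defaults; they are written out
// here only to show the KeyboardEvent.key format.
var instructions_trial = {
  type: 'instructions',
  pages: [
    'Welcome to the experiment.',
    'Press the right arrow key to begin.'
  ],
  key_forward: 'ArrowRight',
  key_backward: 'ArrowLeft',
  allow_backward: true
};
```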
}, + page_label: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Page label', + default: 'Page', + description: 'The text that appears before x/y (current/total) pages when show_page_number is true.' + }, + button_label_previous: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Button label previous', @@ -104,7 +110,7 @@ jsPsych.plugins.instructions = (function() { var pagenum_display = ""; if(trial.show_page_number) { - pagenum_display = "<span id='jspsych-instructions-pagenum'>Page "+(current_page+1)+"/"+trial.pages.length+"</span>"; + pagenum_display = "<span id='jspsych-instructions-pagenum'>"+ trial.page_label + ' ' +(current_page+1)+"/"+trial.pages.length+"</span>"; } if (trial.show_clickable_nav) { @@ -185,8 +191,8 @@ jsPsych.plugins.instructions = (function() { display_element.innerHTML = ''; var trial_data = { - "view_history": JSON.stringify(view_history), - "rt": performance.now() - start_time + view_history: view_history, + rt: performance.now() - start_time }; jsPsych.finishTrial(trial_data); diff --git a/plugins/jspsych-maxdiff.js b/plugins/jspsych-maxdiff.js new file mode 100644 index 0000000000..d8572e04cd --- /dev/null +++ b/plugins/jspsych-maxdiff.js @@ -0,0 +1,173 @@ +/** + * jspsych-maxdiff + * Angus Hughes + * + * a jspsych plugin for maxdiff/conjoint analysis designs + * + */ + +jsPsych.plugins['maxdiff'] = (function () { + + var plugin = {}; + + plugin.info = { + name: 'maxdiff', + description: '', + parameters: { + alternatives: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Alternatives', + array: true, + default: undefined, + description: 'Alternatives presented in the maxdiff table.' + }, + labels: { + type: jsPsych.plugins.parameterType.STRING, + array: true, + pretty_name: 'Labels', + default: undefined, + description: 'Labels to display for left and right response columns.' 
+ }, + randomize_alternative_order: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Randomize Alternative Order', + default: false, + description: 'If true, the order of the alternatives will be randomized.' + }, + preamble: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Preamble', + default: '', + description: 'String to display at top of the page.' + }, + button_label: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: 'Button Label', + default: 'Continue', + description: 'Label of the button.' + }, + required: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Required', + default: false, + description: 'If true, a response in both the left and right columns is required before the trial can end.' + } + } + } + + plugin.trial = function (display_element, trial) { + + var html = ""; + // inject CSS for trial + html += ''; + + // show preamble text + if (trial.preamble !== null) { + html += '
' + trial.preamble + '
'; + } + html += '
'; + + // add maxdiff options /// + // first generate alternative order, randomized here as opposed to randomizing the order of alternatives + // so that the data are always associated with the same alternative regardless of order. + var alternative_order = []; + for (var i = 0; i < trial.alternatives.length; i++) { + alternative_order.push(i); + } + if (trial.randomize_alternative_order) { + alternative_order = jsPsych.randomization.shuffle(alternative_order); + } + + // Start with column headings + var maxdiff_table = ''; + + // construct each row of the maxdiff table + for (var i = 0; i < trial.alternatives.length; i++) { + var alternative = trial.alternatives[alternative_order[i]]; + // add alternative + maxdiff_table += ''; + maxdiff_table += ''; + maxdiff_table += ''; + } + maxdiff_table += '
' + trial.labels[0] + '' + trial.labels[1] + '

' + alternative + '


'; + html += maxdiff_table; + + // add submit button + var enable_submit = trial.required == true ? 'disabled = "disabled"' : ''; + html += ''; + html += '
'; + + display_element.innerHTML = html; + + // function to control responses + // first checks that the same alternative cannot be endorsed in the left and right columns simultaneously. + // then enables the submit button if the trial is required. + const left_right = ["left", "right"]; + left_right.forEach(function(p) { + // Get all elements either 'left' or 'right' + document.getElementsByName(p).forEach(function(alt) { + alt.addEventListener('click', function() { + // Find the opposite (if left, then right & vice versa) identified by the class (jspsych-maxdiff-alt-1, 2, etc) + var op = alt.name == 'left' ? 'right' : 'left'; + var n = document.getElementsByClassName(alt.className).namedItem(op); + // If it's checked, uncheck it. + if (n.checked) { + n.checked = false; + } + + // check response + if (trial.required){ + // Now check whether both a left and a right response have been made, to allow submission + var left_checked = [...document.getElementsByName('left')].some(c => c.checked); + var right_checked = [...document.getElementsByName('right')].some(c => c.checked); + if (left_checked && right_checked) { + document.getElementById("jspsych-maxdiff-next").disabled = false; + } else { + document.getElementById("jspsych-maxdiff-next").disabled = true; + } + } + }); + }); + }); + + // Get the data once the submit button is clicked + display_element.querySelector('#jspsych-maxdiff-form').addEventListener('submit', function(e){ + e.preventDefault(); + + // measure response time + var endTime = performance.now(); + var response_time = endTime - startTime; + + // get the alternative by the data-name attribute, allowing a null response if unchecked + var get_response = function(side){ + var col = display_element.querySelectorAll('[name=\"' + side + '\"]:checked')[0]; + if (col === undefined){ + return null; + } else { + var i = parseInt(col.getAttribute('data-name')); + return trial.alternatives[i]; + } + } + + // data saving + 
var trial_data = { + rt: response_time, + labels: {left: trial.labels[0], right: trial.labels[1]}, + response: {left: get_response('left'), right: get_response('right')} + }; + + // next trial + jsPsych.finishTrial(trial_data); + }); + + var startTime = performance.now(); + }; + + return plugin; +})(); \ No newline at end of file diff --git a/plugins/jspsych-preload.js b/plugins/jspsych-preload.js new file mode 100644 index 0000000000..8a8e1c0b54 --- /dev/null +++ b/plugins/jspsych-preload.js @@ -0,0 +1,345 @@ +/** + * jspsych-preload + * documentation: docs.jspsych.org + **/ + +jsPsych.plugins['preload'] = (function() { + + var plugin = {}; + + plugin.info = { + name: 'preload', + description: '', + parameters: { + auto_preload: { + type: jsPsych.plugins.parameterType.BOOL, + default: false, + description: 'Whether or not to automatically preload any media files based on the timeline passed to jsPsych.init.' + }, + trials: { + type: jsPsych.plugins.parameterType.TIMELINE, + default: [], + description: 'Array with a timeline of trials to automatically preload. If one or more trial objects is provided, '+ + 'then the plugin will attempt to preload the media files used in the trial(s).' + }, + images: { + type: jsPsych.plugins.parameterType.STRING, + default: [], + description: 'Array with one or more image files to load. This parameter is often used in cases where media files cannot '+ + 'be automatically preloaded based on the timeline, e.g. because the media files are passed into an image plugin/parameter with '+ + 'timeline variables or dynamic parameters, or because the image is embedded in an HTML string.' + }, + audio: { + type: jsPsych.plugins.parameterType.STRING, + default: [], + description: 'Array with one or more audio files to load. This parameter is often used in cases where media files cannot '+ + 'be automatically preloaded based on the timeline, e.g. 
because the media files are passed into an audio plugin/parameter with '+ + 'timeline variables or dynamic parameters, or because the audio is embedded in an HTML string.' + }, + video: { + type: jsPsych.plugins.parameterType.STRING, + default: [], + description: 'Array with one or more video files to load. This parameter is often used in cases where media files cannot '+ + 'be automatically preloaded based on the timeline, e.g. because the media files are passed into a video plugin/parameter with '+ + 'timeline variables or dynamic parameters, or because the video is embedded in an HTML string.' + }, + message: { + type: jsPsych.plugins.parameterType.HTML_STRING, + default: null, + description: 'HTML-formatted message to be shown above the progress bar while the files are loading.' + }, + show_progress_bar: { + type: jsPsych.plugins.parameterType.BOOL, + default: true, + description: 'Whether or not to show the loading progress bar.' + }, + continue_after_error: { + type: jsPsych.plugins.parameterType.BOOL, + default: false, + description: 'Whether or not to continue with the experiment if a loading error occurs. If false, then if a loading error occurs, '+ + 'the error_message will be shown on the page and the trial will not end. If true, then if a loading error occurs, the trial will end '+ + 'and preloading failure will be logged in the trial data.' + }, + error_message: { + type: jsPsych.plugins.parameterType.HTML_STRING, + default: 'The experiment failed to load.', + description: 'Error message to show on the page in case of any loading errors. This parameter is only relevant when continue_after_error is false.' + }, + show_detailed_errors: { + type: jsPsych.plugins.parameterType.BOOL, + default: false, + description: 'Whether or not to show a detailed error message on the page. If true, then detailed error messages will be shown on the '+ + 'page for all files that failed to load, along with the general error_message. 
This parameter is only relevant when continue_after_error is false.' + }, + max_load_time: { + type: jsPsych.plugins.parameterType.INT, + default: null, + description: 'The maximum amount of time that the plugin should wait before stopping the preload and either ending the trial '+ + '(if continue_after_error is true) or stopping the experiment with an error message (if continue_after_error is false). '+ + 'If null, the plugin will wait indefinitely for the files to load.' + }, + on_error: { + type: jsPsych.plugins.parameterType.FUNCTION, + default: null, + description: 'Function to be called after a file fails to load. The function takes the file name as its only argument.' + }, + on_success: { + type: jsPsych.plugins.parameterType.FUNCTION, + default: null, + description: 'Function to be called after a file loads successfully. The function takes the file name as its only argument.' + } + } + } + + plugin.trial = function(display_element, trial) { + + var success = null; + var timeout = false; + var failed_images = []; + var failed_audio = []; + var failed_video = []; + var detailed_errors = []; + var in_safe_mode = jsPsych.getSafeModeStatus(); + + // create list of media to preload // + + var images = []; + var audio = []; + var video = []; + + if(trial.auto_preload){ + var auto_preload = jsPsych.pluginAPI.getAutoPreloadList(); + images = images.concat(auto_preload.images); + audio = audio.concat(auto_preload.audio); + video = video.concat(auto_preload.video); + } + + if(trial.trials.length > 0){ + var trial_preloads = jsPsych.pluginAPI.getAutoPreloadList(trial.trials); + images = images.concat(trial_preloads.images); + audio = audio.concat(trial_preloads.audio); + video = video.concat(trial_preloads.video); + } + + images = images.concat(trial.images); + audio = audio.concat(trial.audio); + video = video.concat(trial.video); + + images = jsPsych.utils.unique(jsPsych.utils.flatten(images)); + audio = jsPsych.utils.unique(jsPsych.utils.flatten(audio)); + video = 
jsPsych.utils.unique(jsPsych.utils.flatten(video)); + + if (in_safe_mode) { + // don't preload video if in safe mode (experiment is running via file protocol) + video = []; + } + + // render display of message and progress bar + + var html = ''; + + if(trial.message !== null){ + html += trial.message; + } + + if(trial.show_progress_bar){ + html += ` +
+
+
`; + } + + display_element.innerHTML = html; + + // do preloading + + if(trial.max_load_time !== null){ + jsPsych.pluginAPI.setTimeout(on_timeout, trial.max_load_time); + } + + var total_n = images.length + audio.length + video.length; + var loaded = 0; // success or error count + var loaded_success = 0; // success count + + if (total_n == 0) { + on_success(); + } else { + function load_video(cb){ + jsPsych.pluginAPI.preloadVideo(video, cb, file_loading_success, file_loading_error); + } + function load_audio(cb){ + jsPsych.pluginAPI.preloadAudio(audio, cb, file_loading_success, file_loading_error); + } + function load_images(cb){ + jsPsych.pluginAPI.preloadImages(images, cb, file_loading_success, file_loading_error); + } + if (video.length > 0) { load_video(function () { }) } + if (audio.length > 0) { load_audio(function () { }) } + if (images.length > 0) { load_images(function () { }) } + } + + // helper functions and callbacks + + function update_loading_progress_bar(){ + loaded++; + if(trial.show_progress_bar){ + var percent_loaded = (loaded/total_n)*100; + var preload_progress_bar = jsPsych.getDisplayElement().querySelector('#jspsych-loading-progress-bar'); + if (preload_progress_bar !== null) { + preload_progress_bar.style.width = percent_loaded+"%"; + } + } + } + + // called when a single file loading fails + function file_loading_error(e) { + // update progress bar even if there's an error + update_loading_progress_bar(); + // change success flag after first file loading error + if (success == null) { + success = false; + } + // add file to failed media list + var source = "unknown file"; + if (e.source) { + source = e.source; + } + if (e.error && e.error.path && e.error.path.length > 0) { + if (e.error.path[0].localName == "img") { + failed_images.push(source); + } else if (e.error.path[0].localName == "audio") { + failed_audio.push(source); + } else if (e.error.path[0].localName == "video") { + failed_video.push(source); + } + } + // construct detailed 
error message + var err_msg = '

Error loading file: '+source+'
'; + if (e.error.statusText) { + err_msg += 'File request response status: '+e.error.statusText+'
'; + } + if (e.error == "404") { + err_msg += '404 - file not found.
'; + } + if (typeof e.error.loaded !== 'undefined' && e.error.loaded !== null && e.error.loaded !== 0) { + err_msg += e.error.loaded+' bytes transferred.'; + } else { + err_msg += 'File did not begin loading. Check that file path is correct and reachable by the browser,
'+ + 'and that loading is not blocked by cross-origin resource sharing (CORS) errors.'; + } + err_msg += '

'; + detailed_errors.push(err_msg); + // call trial's on_error function + after_error(source); + // if this is the last file + if (loaded == total_n) { + if (trial.continue_after_error) { + // continue_after_error is true, so end the trial and continue with the experiment + end_trial(); + } else { + // otherwise stop and show the error message + stop_with_error_message(); + } + } + } + + // called when a single file loads successfully + function file_loading_success(source) { + update_loading_progress_bar(); + // call trial's on_success function + after_success(source); + loaded_success++; + if (loaded_success == total_n) { + // if this is the last file and all loaded successfully, call success function + on_success(); + } else if (loaded == total_n) { + // if this is the last file and there was at least one error + if (trial.continue_after_error) { + // end the trial and continue with experiment + end_trial(); + } else { + // if continue_after_error is false, then stop with an error + stop_with_error_message(); + } + } + } + + // called if all files load successfully + function on_success() { + if (typeof timeout !== 'undefined' && timeout === false) { + // clear timeout immediately after finishing, to handle race condition with max_load_time + jsPsych.pluginAPI.clearAllTimeouts(); + // need to call cancel preload function to clear global jsPsych preload_request list, even when they've all succeeded + jsPsych.pluginAPI.cancelPreloads(); + success = true; + end_trial(); + } + } + + // called if all files haven't finished loading when max_load_time is reached + function on_timeout() { + //console.log('timeout fired'); + jsPsych.pluginAPI.cancelPreloads(); + if (typeof success !== 'undefined' && (success === false || success === null)) { + timeout = true; + if (loaded_success < total_n) { + success = false; + } + after_error('timeout'); // call trial's on_error event handler here, in case loading timed out with no file errors + detailed_errors.push('

Loading timed out.
'+ + 'Consider compressing your stimuli files, loading your files in smaller batches,
'+ + 'and/or increasing the max_load_time parameter.

'); + if (trial.continue_after_error) { + end_trial(); + } else { + stop_with_error_message(); + } + } + } + + function stop_with_error_message() { + jsPsych.pluginAPI.clearAllTimeouts(); + jsPsych.pluginAPI.cancelPreloads(); + // show error message + display_element.innerHTML = trial.error_message; + // show detailed errors, if necessary + if (trial.show_detailed_errors) { + display_element.innerHTML += '

Error details:

'; + detailed_errors.forEach(function(e) { + display_element.innerHTML += e; + }); + } + } + + function after_error(source) { + // call on_error function and pass file name + if (trial.on_error !== null) { + trial.on_error(source); + } + } + function after_success(source) { + // call on_success function and pass file name + if (trial.on_success !== null) { + trial.on_success(source); + } + } + + function end_trial(){ + // clear timeout again when end_trial is called, to handle race condition with max_load_time + jsPsych.pluginAPI.clearAllTimeouts(); + var trial_data = { + success: success, + timeout: timeout, + failed_images: failed_images, + failed_audio: failed_audio, + failed_video: failed_video + }; + // clear the display + display_element.innerHTML = ''; + jsPsych.finishTrial(trial_data); + } + }; + + return plugin; + })(); + \ No newline at end of file diff --git a/plugins/jspsych-rdk.js b/plugins/jspsych-rdk.js index 40febc5d34..80674f1f7d 100644 --- a/plugins/jspsych-rdk.js +++ b/plugins/jspsych-rdk.js @@ -37,14 +37,14 @@ jsPsych.plugins["rdk"] = (function() { name: "rdk", parameters: { choices: { - type: jsPsych.plugins.parameterType.INT, + type: jsPsych.plugins.parameterType.KEY, pretty_name: "Choices", - default: [], + default: jsPsych.ALL_KEYS, array: true, description: "The valid keys that the subject can press to indicate a response" }, correct_choice: { - type: jsPsych.plugins.parameterType.STRING, + type: jsPsych.plugins.parameterType.KEY, pretty_name: "Correct choice", default: undefined, array: true, @@ -371,7 +371,11 @@ jsPsych.plugins["rdk"] = (function() { //Remove the margins and padding of the canvas canvas.style.margin = 0; - canvas.style.padding = 0; + canvas.style.padding = 0; + // use absolute positioning in top left corner to get rid of scroll bars + canvas.style.position = 'absolute'; + canvas.style.top = 0; + canvas.style.left = 0; //Get the context of the canvas so that it can be painted on. 
var ctx = canvas.getContext("2d"); @@ -518,45 +522,44 @@ jsPsych.plugins["rdk"] = (function() { //Place all the data to be saved from this trial in one data object var trial_data = { - "rt": response.rt, //The response time - "key_press": response.key, //The key that the subject pressed - "correct": correctOrNot(), //If the subject response was correct - "choices": trial.choices, //The set of valid keys - "correct_choice": trial.correct_choice, //The correct choice - "trial_duration": trial.trial_duration, //The trial duration - "response_ends_trial": trial.response_ends_trial, //If the response ends the trial - "number_of_apertures": trial.number_of_apertures, - "number_of_dots": trial.number_of_dots, - "number_of_sets": trial.number_of_sets, - "coherent_direction": trial.coherent_direction, - "coherence": trial.coherence, - "opposite_coherence": trial.opposite_coherence, - "dot_radius": trial.dot_radius, - "dot_life": trial.dot_life, - "move_distance": trial.move_distance, - "aperture_width": trial.aperture_width, - "aperture_height": trial.aperture_height, - "dot_color": trial.dot_color, - "background_color": trial.background_color, - "RDK_type": trial.RDK_type, - "aperture_type": trial.aperture_type, - "reinsert_type": trial.reinsert_type, - "frame_rate": frameRate, //The average frame rate for the trial - "frame_rate_array": JSON.stringify(frameRateArray), //The array of ms per frame in this trial, in the form of a JSON string - "number_of_frames": numberOfFrames, //The number of frames in this trial - "aperture_center_x": trial.aperture_center_x, - "aperture_center_y": trial.aperture_center_y, - "fixation_cross": trial.fixation_cross, - "fixation_cross_width": trial.fixation_cross_width, - "fixation_cross_height": trial.fixation_cross_height, - "fixation_cross_color": trial.fixation_cross_color, - "fixation_cross_thickness": trial.fixation_cross_thickness, - "border": trial.border, - "border_thickness": trial.border_thickness, - "border_color": 
trial.border_color, - "canvas_width": canvasWidth, - "canvas_height": canvasHeight - + rt: response.rt, //The response time + response: response.key, //The key that the subject pressed + correct: correctOrNot(), //If the subject response was correct + choices: trial.choices, //The set of valid keys + correct_choice: trial.correct_choice, //The correct choice + trial_duration: trial.trial_duration, //The trial duration + response_ends_trial: trial.response_ends_trial, //If the response ends the trial + number_of_apertures: trial.number_of_apertures, + number_of_dots: trial.number_of_dots, + number_of_sets: trial.number_of_sets, + coherent_direction: trial.coherent_direction, + coherence: trial.coherence, + opposite_coherence: trial.opposite_coherence, + dot_radius: trial.dot_radius, + dot_life: trial.dot_life, + move_distance: trial.move_distance, + aperture_width: trial.aperture_width, + aperture_height: trial.aperture_height, + dot_color: trial.dot_color, + background_color: trial.background_color, + RDK_type: trial.RDK_type, + aperture_type: trial.aperture_type, + reinsert_type: trial.reinsert_type, + frame_rate: frameRate, //The average frame rate for the trial + frame_rate_array: frameRateArray, //The array of ms per frame in this trial + number_of_frames: numberOfFrames, //The number of frames in this trial + aperture_center_x: trial.aperture_center_x, + aperture_center_y: trial.aperture_center_y, + fixation_cross: trial.fixation_cross, + fixation_cross_width: trial.fixation_cross_width, + fixation_cross_height: trial.fixation_cross_height, + fixation_cross_color: trial.fixation_cross_color, + fixation_cross_thickness: trial.fixation_cross_thickness, + border: trial.border, + border_thickness: trial.border_thickness, + border_color: trial.border_color, + canvas_width: canvasWidth, + canvas_height: canvasHeight } //Remove the canvas as the child of the display_element element @@ -597,12 +600,14 @@ jsPsych.plugins["rdk"] = (function() { 
if(trial.correct_choice.constructor === Array){ //If it is an array //If the elements are characters if(typeof trial.correct_choice[0] === 'string' || trial.correct_choice[0] instanceof String){ - trial.correct_choice = trial.correct_choice.map(function(x){return x.toUpperCase();}); //Convert all the values to upper case - return trial.correct_choice.includes(String.fromCharCode(response.key)); //If the response is included in the correct_choice array, return true. Else, return false. + var key_in_choices = trial.correct_choice.some(function(x) { + return jsPsych.pluginAPI.compareKeys(x, response.key); + }); + return key_in_choices; //If the response is included in the correct_choice array, return true. Else, return false. } //Else if the elements are numbers (javascript character codes) else if (typeof trial.correct_choice[0] === 'number'){ + console.error('Error in RDK plugin: correct_choice value must be a string.'); - return trial.correct_choice.includes(response.key); //If the response is included in the correct_choice array, return true. Else, return false. } } //Else compare the char with the response key @@ -610,12 +615,11 @@ jsPsych.plugins["rdk"] = (function() { //If the element is a character if(typeof trial.correct_choice === 'string' || trial.correct_choice instanceof String){ //Return true if the user's response matches the correct answer. Return false otherwise. 
- return response.key == trial.correct_choice.toUpperCase().charCodeAt(0); + return jsPsych.pluginAPI.compareKeys(response.key, trial.correct_choice); } //Else if the element is a number (javascript character codes) else if (typeof trial.correct_choice === 'number'){ - console.log(response.key == trial.correct_choice); - return response.key == trial.correct_choice; + console.error('Error in RDK plugin: correct_choice value must be a string.'); } } } diff --git a/plugins/jspsych-reconstruction.js b/plugins/jspsych-reconstruction.js index 28dc90db2d..fb0eabd2e2 100644 --- a/plugins/jspsych-reconstruction.js +++ b/plugins/jspsych-reconstruction.js @@ -37,13 +37,13 @@ jsPsych.plugins['reconstruction'] = (function() { description: 'The change in the stimulus parameter caused by pressing one of the modification keys.' }, key_increase: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key increase', default: 'h', description: 'The key to press for increasing the parameter value.' }, key_decrease: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Key decrease', default: 'g', description: 'The key to press for decreasing the parameter value.' @@ -67,13 +67,13 @@ jsPsych.plugins['reconstruction'] = (function() { //console.log('fire'); - var key_i = (typeof trial.key_increase == 'string') ? jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.key_increase) : trial.key_increase; - var key_d = (typeof trial.key_decrease == 'string') ? 
jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.key_decrease) : trial.key_decrease; + var key_i = trial.key_increase; + var key_d = trial.key_decrease; // get new param value - if (info.key == key_i) { + if (jsPsych.pluginAPI.compareKeys(info.key, key_i)) { param = param + trial.step_size; - } else if (info.key == key_d) { + } else if (jsPsych.pluginAPI.compareKeys(info.key, key_d)) { param = param - trial.step_size; } param = Math.max(Math.min(1, param), 0); @@ -115,9 +115,9 @@ jsPsych.plugins['reconstruction'] = (function() { // save data var trial_data = { - "rt": response_time, - "final_value": param, - "start_value": trial.starting_value + rt: response_time, + final_value: param, + start_value: trial.starting_value }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-resize.js b/plugins/jspsych-resize.js index ae417339e0..37c5c338fd 100644 --- a/plugins/jspsych-resize.js +++ b/plugins/jspsych-resize.js @@ -153,9 +153,9 @@ jsPsych.plugins["resize"] = (function() { // finishes trial var trial_data = { - 'final_height_px': final_height_px, - 'final_width_px': final_width_px, - 'scale_factor': scale_factor + final_height_px: final_height_px, + final_width_px: final_width_px, + scale_factor: scale_factor } jsPsych.finishTrial(trial_data); diff --git a/plugins/jspsych-same-different-html.js b/plugins/jspsych-same-different-html.js index 1db6da1c16..3695331b38 100644 --- a/plugins/jspsych-same-different-html.js +++ b/plugins/jspsych-same-different-html.js @@ -27,26 +27,26 @@ jsPsych.plugins['same-different-html'] = (function() { type: jsPsych.plugins.parameterType.SELECT, pretty_name: 'Answer', options: ['same', 'different'], - default: 75, + default: undefined, description: 'Either "same" or "different".' 
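The repeated `compareKeys` substitutions in these plugins replace raw keycode equality checks with a comparison that tolerates letter case. A minimal sketch of the idea, under the assumption that the default comparison treats single-character string keys case-insensitively (an illustration, not the actual jsPsych implementation):

```javascript
// Illustrative stand-in for jsPsych.pluginAPI.compareKeys (assumption: the
// default behavior lower-cases string keys before comparing, so 'g' and 'G'
// count as the same response; the real function also handles other cases).
function compareKeysSketch(key1, key2) {
  if (typeof key1 === 'string' && typeof key2 === 'string') {
    return key1.toLowerCase() === key2.toLowerCase();
  }
  // non-string inputs (e.g. null for "no response") fall back to strict equality
  return key1 === key2;
}
```

With a check like this, a participant pressing 'G' with CapsLock on still matches a `key_decrease` of 'g', which the old `charCodeAt(0)` comparison only achieved by upper-casing both sides by hand.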
}, same_key: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Same key', - default: 'Q', + default: 'q', description: 'The key that subjects should press to indicate that the two stimuli are the same.' }, different_key: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Different key', - default: 'P', + default: 'p', description: 'The key that subjects should press to indicate that the two stimuli are different.' }, first_stim_duration: { type: jsPsych.plugins.parameterType.INT, pretty_name: 'First stimulus duration', default: 1000, - description: 'How long to show the first stimulus for in milliseconds.' + description: 'How long to show the first stimulus for in milliseconds. If null, then the stimulus will remain on the screen until any keypress is made.' }, gap_duration: { type: jsPsych.plugins.parameterType.INT, @@ -58,7 +58,7 @@ jsPsych.plugins['same-different-html'] = (function() { type: jsPsych.plugins.parameterType.INT, pretty_name: 'Second stimulus duration', default: 1000, - description: 'How long to show the second stimulus for in milliseconds.' + description: 'How long to show the second stimulus for in milliseconds. If null, then the stimulus will remain on the screen until a valid response is made.' }, prompt: { type: jsPsych.plugins.parameterType.STRING, @@ -124,27 +124,27 @@ jsPsych.plugins['same-different-html'] = (function() { var correct = false; - var skey = typeof trial.same_key == 'string' ? jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.same_key) : trial.same_key; - var dkey = typeof trial.different_key == 'string' ? 
jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.different_key) : trial.different_key; + var skey = trial.same_key; + var dkey = trial.different_key; - if (info.key == skey && trial.answer == 'same') { + if (jsPsych.pluginAPI.compareKeys(info.key, skey) && trial.answer == 'same') { correct = true; } - if (info.key == dkey && trial.answer == 'different') { + if (jsPsych.pluginAPI.compareKeys(info.key, dkey) && trial.answer == 'different') { correct = true; } var trial_data = { - "rt": info.rt, - "answer": trial.answer, - "correct": correct, - "stimulus": JSON.stringify([trial.stimuli[0], trial.stimuli[1]]), - "key_press": info.key + rt: info.rt, + answer: trial.answer, + correct: correct, + stimulus: [trial.stimuli[0], trial.stimuli[1]], + response: info.key }; if (first_stim_info) { trial_data["rt_stim1"] = first_stim_info.rt; - trial_data["key_press_stim1"] = first_stim_info.key; + trial_data["response_stim1"] = first_stim_info.key; } display_element.innerHTML = ''; diff --git a/plugins/jspsych-same-different-image.js b/plugins/jspsych-same-different-image.js index a239d22804..39b3f9da8c 100644 --- a/plugins/jspsych-same-different-image.js +++ b/plugins/jspsych-same-different-image.js @@ -29,26 +29,26 @@ jsPsych.plugins['same-different-image'] = (function() { type: jsPsych.plugins.parameterType.SELECT, pretty_name: 'Answer', options: ['same', 'different'], - default: 75, + default: undefined, description: 'Either "same" or "different".' }, same_key: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Same key', - default: 'Q', + default: 'q', - description: '' + description: 'The key that subjects should press to indicate that the two stimuli are the same.' }, different_key: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Different key', - default: 'P', + default: 'p', - description: 'The key that subjects should press to indicate that the two stimuli are the same.' + description: 'The key that subjects should press to indicate that the two stimuli are different.'
}, first_stim_duration: { type: jsPsych.plugins.parameterType.INT, pretty_name: 'First stimulus duration', default: 1000, - description: 'How long to show the first stimulus for in milliseconds.' + description: 'How long to show the first stimulus for in milliseconds. If null, then the stimulus will remain on the screen until any keypress is made.' }, gap_duration: { type: jsPsych.plugins.parameterType.INT, @@ -60,7 +60,7 @@ jsPsych.plugins['same-different-image'] = (function() { type: jsPsych.plugins.parameterType.INT, pretty_name: 'Second stimulus duration', default: 1000, - description: 'How long to show the second stimulus for in milliseconds.' + description: 'How long to show the second stimulus for in milliseconds. If null, then the stimulus will remain on the screen until a valid response is made.' }, prompt: { type: jsPsych.plugins.parameterType.STRING, @@ -125,27 +125,27 @@ jsPsych.plugins['same-different-image'] = (function() { var correct = false; - var skey = typeof trial.same_key == 'string' ? jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.same_key) : trial.same_key; - var dkey = typeof trial.different_key == 'string' ? 
jsPsych.pluginAPI.convertKeyCharacterToKeyCode(trial.different_key) : trial.different_key; + var skey = trial.same_key; + var dkey = trial.different_key; - if (info.key == skey && trial.answer == 'same') { + if (jsPsych.pluginAPI.compareKeys(info.key,skey) && trial.answer == 'same') { correct = true; } - if (info.key == dkey && trial.answer == 'different') { + if (jsPsych.pluginAPI.compareKeys(info.key, dkey) && trial.answer == 'different') { correct = true; } var trial_data = { - "rt": info.rt, - "answer": trial.answer, - "correct": correct, - "stimulus": JSON.stringify([trial.stimuli[0], trial.stimuli[1]]), - "key_press": info.key + rt: info.rt, + answer: trial.answer, + correct: correct, + stimulus: [trial.stimuli[0], trial.stimuli[1]], + response: info.key }; if (first_stim_info) { trial_data["rt_stim1"] = first_stim_info.rt; - trial_data["key_press_stim1"] = first_stim_info.key; + trial_data["response_stim1"] = first_stim_info.key; } display_element.innerHTML = ''; diff --git a/plugins/jspsych-serial-reaction-time-mouse.js b/plugins/jspsych-serial-reaction-time-mouse.js index dd1000315c..85eb3eb57e 100644 --- a/plugins/jspsych-serial-reaction-time-mouse.js +++ b/plugins/jspsych-serial-reaction-time-mouse.js @@ -46,7 +46,7 @@ jsPsych.plugins["serial-reaction-time-mouse"] = (function() { type: jsPsych.plugins.parameterType.BOOL, pretty_name: 'Response ends trial', default: true, - description: 'If true, the trial ends after a key press.' + description: 'If true, the trial ends after a mouse click.' 
}, pre_target_duration: { type: jsPsych.plugins.parameterType.INT, @@ -105,7 +105,7 @@ jsPsych.plugins["serial-reaction-time-mouse"] = (function() { //show prompt if there is one if (trial.prompt !== null) { - display_element.innerHTML += trial.prompt; + display_element.insertAdjacentHTML('beforeend', trial.prompt); } function showTarget(){ @@ -151,12 +151,11 @@ jsPsych.plugins["serial-reaction-time-mouse"] = (function() { // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "grid": JSON.stringify(trial.grid), - "target": JSON.stringify(trial.target), - "response_row": response.row, - "response_column": response.column, - "correct": response.row == trial.target[0] && response.column == trial.target[1] + rt: response.rt, + grid: trial.grid, + target: trial.target, + response: [parseInt(response.row,10), parseInt(response.column,10)], + correct: response.row == trial.target[0] && response.column == trial.target[1] }; // clear the display diff --git a/plugins/jspsych-serial-reaction-time.js b/plugins/jspsych-serial-reaction-time.js index bd9d1748cb..4b390bfc3a 100644 --- a/plugins/jspsych-serial-reaction-time.js +++ b/plugins/jspsych-serial-reaction-time.js @@ -31,7 +31,7 @@ jsPsych.plugins["serial-reaction-time"] = (function() { description: 'The location of the target. The array should be the [row, column] of the target.' 
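The serial-reaction-time-mouse data change above also illustrates the new data format used throughout this diff: structured values are stored as real arrays and numbers instead of `JSON.stringify`-ed strings, and `response_row`/`response_column` collapse into a single `response` array. The following self-contained snippet reproduces that construction with a hypothetical response object (the string row/column values stand in for values read from element attributes):

```javascript
// Build trial_data the way the updated plugin does: row/column may arrive
// as strings, so parseInt with an explicit radix of 10 normalizes them, and
// the result is stored as a plain [row, column] array rather than JSON text.
var response = { rt: 432.5, row: '1', column: '2' }; // hypothetical values
var target = [1, 2];

var trial_data = {
  rt: response.rt,
  target: target, // stored as a real array, not JSON.stringify(target)
  response: [parseInt(response.row, 10), parseInt(response.column, 10)],
  correct: parseInt(response.row, 10) == target[0] &&
           parseInt(response.column, 10) == target[1]
};
// trial_data.response -> [1, 2]; trial_data.correct -> true
```

Storing native types means downstream analysis code no longer needs a `JSON.parse` step when reading the saved data.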
}, choices: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Choices', array: true, default: [['3','5','7','9']], @@ -171,11 +171,11 @@ jsPsych.plugins["serial-reaction-time"] = (function() { // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "key_press": response.key, - "correct": response.correct, - "grid": JSON.stringify(trial.grid), - "target": JSON.stringify(trial.target) + rt: response.rt, + response: response.key, + correct: response.correct, + grid: trial.grid, + target: trial.target }; // clear the display @@ -196,8 +196,8 @@ jsPsych.plugins["serial-reaction-time"] = (function() { var responseLoc = []; for(var i=0; i'; } // start form - html += '
' + if ( trial.autocomplete ) { + html += '' + } else { + html += '' + } // add form HTML / input elements html += trial.html; @@ -59,9 +75,20 @@ jsPsych.plugins['survey-html-form'] = (function() { // add submit button html += ''; - html += '
' + html += ''; display_element.innerHTML = html; + if ( trial.autofocus !== '' ) { + var focus_elements = display_element.querySelectorAll('#'+trial.autofocus); + if ( focus_elements.length === 0 ) { + console.warn('No element found with id: '+trial.autofocus); + } else if ( focus_elements.length > 1 ) { + console.warn('The id "'+trial.autofocus+'" is not unique so autofocus will not work.'); + } else { + focus_elements[0].focus(); + } + } + display_element.querySelector('#jspsych-survey-html-form').addEventListener('submit', function(event) { // don't submit form event.preventDefault(); @@ -78,8 +105,8 @@ jsPsych.plugins['survey-html-form'] = (function() { // save data var trialdata = { - "rt": response_time, - "responses": JSON.stringify(question_data) + rt: response_time, + response: question_data }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-survey-likert.js b/plugins/jspsych-survey-likert.js index a2e5f9b294..f4f9d94874 100644 --- a/plugins/jspsych-survey-likert.js +++ b/plugins/jspsych-survey-likert.js @@ -71,6 +71,12 @@ jsPsych.plugins['survey-likert'] = (function() { pretty_name: 'Button label', default: 'Continue', description: 'Label of the button.' + }, + autocomplete: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Allow autocomplete', + default: false, + description: "Setting this to true will enable browser auto-complete or auto-fill for the form." } } } @@ -99,7 +105,12 @@ jsPsych.plugins['survey-likert'] = (function() { if(trial.preamble !== null){ html += '
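The new `autofocus` handling above warns rather than silently failing when the given id matches zero or multiple elements. Stripped of the DOM calls, the three-way branch reduces to the following (the function name and return values are illustrative, chosen only to make the logic testable outside a browser):

```javascript
// Model of the autofocus branch in survey-html-form: given how many
// elements matched '#' + trial.autofocus, decide whether to warn about a
// missing id, warn about a duplicated id, or focus the single match.
function autofocusAction(matchCount) {
  if (matchCount === 0) {
    return 'warn-missing';   // console.warn('No element found with id: ...')
  } else if (matchCount > 1) {
    return 'warn-duplicate'; // console.warn('The id "..." is not unique ...')
  }
  return 'focus';            // focus_elements[0].focus()
}
```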
'+trial.preamble+'
'; } - html += '
'; + + if ( trial.autocomplete ) { + html += ''; + } else { + html += ''; + } // add likert scale questions /// // generate question order. this is randomized here as opposed to randomizing the order of trial.questions @@ -120,11 +131,11 @@ jsPsych.plugins['survey-likert'] = (function() { var width = 100 / question.labels.length; var options_string = '
    '; for (var j = 0; j < question.labels.length; j++) { - options_string += '
  • '; + options_string += '>' + question.labels[j] + ''; } options_string += '
'; html += options_string; @@ -166,9 +177,9 @@ jsPsych.plugins['survey-likert'] = (function() { // save data var trial_data = { - "rt": response_time, - "responses": JSON.stringify(question_data), - "question_order": JSON.stringify(question_order) + rt: response_time, + response: question_data, + question_order: question_order }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-survey-multi-choice.js b/plugins/jspsych-survey-multi-choice.js index eb58c9f59b..acdc1397a9 100644 --- a/plugins/jspsych-survey-multi-choice.js +++ b/plugins/jspsych-survey-multi-choice.js @@ -71,6 +71,12 @@ jsPsych.plugins['survey-multi-choice'] = (function() { pretty_name: 'Button label', default: 'Continue', description: 'Label of the button.' + }, + autocomplete: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Allow autocomplete', + default: false, + description: "Setting this to true will enable browser auto-complete or auto-fill for the form." } } } @@ -95,8 +101,11 @@ jsPsych.plugins['survey-multi-choice'] = (function() { } // form element - html += ''; - + if ( trial.autocomplete ) { + html += ''; + } else { + html += ''; + } // generate question order. this is randomized here as opposed to randomizing the order of trial.questions // so that the data are always associated with the same question regardless of order var question_order = []; @@ -140,8 +149,9 @@ jsPsych.plugins['survey-multi-choice'] = (function() { // add radio button container html += '
'; - html += ''; + html += ''; html += '
'; } @@ -181,9 +191,9 @@ jsPsych.plugins['survey-multi-choice'] = (function() { } // save data var trial_data = { - "rt": response_time, - "responses": JSON.stringify(question_data), - "question_order": JSON.stringify(question_order) + rt: response_time, + response: question_data, + question_order: question_order }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-survey-multi-select.js b/plugins/jspsych-survey-multi-select.js index 3408f355e3..d50efc26ec 100644 --- a/plugins/jspsych-survey-multi-select.js +++ b/plugins/jspsych-survey-multi-select.js @@ -75,6 +75,12 @@ jsPsych.plugins['survey-multi-select'] = (function() { pretty_name: 'Required message', default: 'You must choose at least one response for this question', description: 'Message that will be displayed if required question is not answered.' + }, + autocomplete: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Allow autocomplete', + default: false, + description: "Setting this to true will enable browser auto-complete or auto-fill for the form." } } } @@ -99,6 +105,9 @@ jsPsych.plugins['survey-multi-select'] = (function() { var trial_form_id = _join(plugin_id_name, "form"); display_element.innerHTML += '
'; var trial_form = display_element.querySelector("#" + trial_form_id); + if ( !trial.autocomplete ) { + trial_form.setAttribute('autocomplete',"off"); + } // show preamble text var preamble_id_name = _join(plugin_id_name, 'preamble'); if(trial.preamble !== null){ @@ -146,14 +155,14 @@ jsPsych.plugins['survey-multi-select'] = (function() { label.innerHTML = question.options[j]; label.setAttribute('for', input_id) - // create checkboxes + // create checkboxes var input = document.createElement('input'); input.setAttribute('type', "checkbox"); input.setAttribute('name', input_name); input.setAttribute('id', input_id); input.setAttribute('value', question.options[j]) form.appendChild(label) - form.insertBefore(input, label) + label.insertBefore(input, label.firstChild) } } // add submit button @@ -205,9 +214,9 @@ jsPsych.plugins['survey-multi-select'] = (function() { // save data var trial_data = { - "rt": response_time, - "responses": JSON.stringify(question_data), - "question_order": JSON.stringify(question_order) + rt: response_time, + response: question_data, + question_order: question_order }; display_element.innerHTML = ''; diff --git a/plugins/jspsych-survey-text.js b/plugins/jspsych-survey-text.js index 90e108599d..762cc30eae 100644 --- a/plugins/jspsych-survey-text.js +++ b/plugins/jspsych-survey-text.js @@ -31,7 +31,7 @@ jsPsych.plugins['survey-text'] = (function() { }, placeholder: { type: jsPsych.plugins.parameterType.STRING, - pretty_name: 'Value', + pretty_name: 'Placeholder', default: "", description: 'Placeholder text in the textfield.' }, @@ -72,6 +72,12 @@ jsPsych.plugins['survey-text'] = (function() { pretty_name: 'Button label', default: 'Continue', description: 'The text that appears on the button to finish the trial.' + }, + autocomplete: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Allow autocomplete', + default: false, + description: "Setting this to true will enable browser auto-complete or auto-fill for the form." 
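All of the survey plugins in this diff gain the same `autocomplete` parameter; they differ only in mechanism (emitting the attribute inside the markup string versus calling `setAttribute('autocomplete', "off")` on the created form, as survey-multi-select does above). A minimal sketch of the markup-string variant, with an illustrative form id:

```javascript
// Build the opening <form> tag conditionally, as the survey-* plugins now
// do: when trial.autocomplete is false (the default), browser auto-fill is
// disabled via autocomplete="off"; when true, the attribute is omitted and
// the browser's normal behavior applies.
function formOpenTag(autocomplete, formId) {
  if (autocomplete) {
    return '<form id="' + formId + '">';
  }
  return '<form id="' + formId + '" autocomplete="off">';
}
```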
} } } @@ -100,8 +106,11 @@ jsPsych.plugins['survey-text'] = (function() { html += '
'+trial.preamble+'
'; } // start form - html += '
' - + if (trial.autocomplete) { + html += ''; + } else { + html += ''; + } // generate question order var question_order = []; for(var i=0; i -1){ file_name = file_name.substring(0, file_name.indexOf('?')); } var type = file_name.substr(file_name.lastIndexOf('.') + 1); type = type.toLowerCase(); + if (type == "mov") { + console.warn('Warning: video-button-response plugin does not reliably support .mov files.') + } video_html+=''; } } @@ -170,7 +187,7 @@ jsPsych.plugins["video-button-response"] = (function() { video_html += '
'; for (var i = 0; i < trial.choices.length; i++) { var str = buttons[i].replace(/%choice%/g, trial.choices[i]); - video_html += '
'+str+'
'; + video_html += '
'+str+'
'; } video_html += '
'; @@ -183,37 +200,53 @@ jsPsych.plugins["video-button-response"] = (function() { var start_time = performance.now(); + var video_element = display_element.querySelector('#jspsych-video-button-response-stimulus'); + if(video_preload_blob){ - display_element.querySelector('#jspsych-video-button-response-stimulus').src = video_preload_blob; + video_element.src = video_preload_blob; } - display_element.querySelector('#jspsych-video-button-response-stimulus').onended = function(){ + video_element.onended = function(){ if(trial.trial_ends_after_video){ end_trial(); + } else if (!trial.response_allowed_while_playing) { + enable_buttons(); } } + video_element.playbackRate = trial.rate; + + // if video start time is specified, hide the video and set the starting time + // before showing and playing, so that the video doesn't automatically show the first frame if(trial.start !== null){ - display_element.querySelector('#jspsych-video-button-response-stimulus').currentTime = trial.start; + video_element.pause(); + video_element.currentTime = trial.start; + video_element.onseeked = function() { + video_element.style.visibility = "visible"; + if (trial.autoplay) { + video_element.play(); + } + } } + var stopped = false; if(trial.stop !== null){ - display_element.querySelector('#jspsych-video-button-response-stimulus').addEventListener('timeupdate', function(e){ - var currenttime = display_element.querySelector('#jspsych-video-button-response-stimulus').currentTime; + video_element.addEventListener('timeupdate', function(e){ + var currenttime = video_element.currentTime; if(currenttime >= trial.stop){ - display_element.querySelector('#jspsych-video-button-response-stimulus').pause(); + video_element.pause(); + if (trial.trial_ends_after_video && !(stopped)) { + stopped = true; // this is to prevent end_trial from being called twice, because the timeupdate event can fire in quick succession + end_trial(); + } } }) } - 
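The `stopped` flag added above guards against `timeupdate` firing several more times after `trial.stop` is reached, which would otherwise call `end_trial()` repeatedly. A DOM-free simulation of that guard (the event stream is replaced by a plain array of timestamps):

```javascript
// Simulate repeated 'timeupdate' events crossing trial.stop and verify
// that the guard lets end_trial run exactly once.
var endTrialCalls = 0;
function end_trial() { endTrialCalls++; }

var stop = 2.0;     // stand-in for trial.stop, in seconds
var stopped = false;

function onTimeUpdate(currentTime) {
  if (currentTime >= stop) {
    // video_element.pause() would go here
    if (!stopped) {
      stopped = true; // timeupdate keeps firing; only the first crossing ends the trial
      end_trial();
    }
  }
}

// timeupdate typically fires every ~250 ms, so several events land past stop
[1.9, 2.01, 2.26, 2.51].forEach(onTimeUpdate);
// endTrialCalls -> 1
```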
display_element.querySelector('#jspsych-video-button-response-stimulus').playbackRate = trial.rate; - - // add event listeners to buttons - for (var i = 0; i < trial.choices.length; i++) { - display_element.querySelector('#jspsych-video-button-response-button-' + i).addEventListener('click', function(e){ - var choice = e.currentTarget.getAttribute('data-choice'); // don't use dataset for jsdom compatibility - after_response(choice); - }); + if(trial.response_allowed_while_playing){ + enable_buttons(); + } else { + disable_buttons(); } // store response @@ -228,11 +261,16 @@ jsPsych.plugins["video-button-response"] = (function() { // kill any remaining setTimeout handlers jsPsych.pluginAPI.clearAllTimeouts(); + // stop the video file if it is playing + // remove any remaining end event handlers + display_element.querySelector('#jspsych-video-button-response-stimulus').pause(); + display_element.querySelector('#jspsych-video-button-response-stimulus').onended = function() {}; + // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "button_pressed": response.button + rt: response.rt, + stimulus: trial.stimulus, + response: response.button }; // clear the display @@ -240,7 +278,7 @@ jsPsych.plugins["video-button-response"] = (function() { // move on to the next trial jsPsych.finishTrial(trial_data); - }; + } // function to handle responses by the subject function after_response(choice) { @@ -248,24 +286,47 @@ jsPsych.plugins["video-button-response"] = (function() { // measure rt var end_time = performance.now(); var rt = end_time - start_time; - response.button = choice; + response.button = parseInt(choice); response.rt = rt; // after a valid response, the stimulus will have the CSS class 'responded' // which can be used to provide visual feedback that a response was recorded - display_element.querySelector('#jspsych-video-button-response-stimulus').className += ' responded'; + video_element.className += ' 
responded'; // disable all the buttons after a response - var btns = document.querySelectorAll('.jspsych-video-button-response-button button'); - for(var i=0; i -1){ file_name = file_name.substring(0, file_name.indexOf('?')); } var type = file_name.substr(file_name.lastIndexOf('.') + 1); type = type.toLowerCase(); + if (type == "mov") { + console.warn('Warning: video-keyboard-response plugin does not reliably support .mov files.') + } video_html+=''; } } @@ -142,31 +159,57 @@ jsPsych.plugins["video-keyboard-response"] = (function() { display_element.innerHTML = video_html; + var video_element = display_element.querySelector('#jspsych-video-keyboard-response-stimulus'); + if(video_preload_blob){ - display_element.querySelector('#jspsych-video-keyboard-response-stimulus').src = video_preload_blob; + video_element.src = video_preload_blob; } - display_element.querySelector('#jspsych-video-keyboard-response-stimulus').onended = function(){ + video_element.onended = function(){ if(trial.trial_ends_after_video){ end_trial(); } + if ((trial.response_allowed_while_playing == false) & (!trial.trial_ends_after_video)) { + // start keyboard listener + var keyboardListener = jsPsych.pluginAPI.getKeyboardResponse({ + callback_function: after_response, + valid_responses: trial.choices, + rt_method: 'performance', + persist: false, + allow_held_key: false, + }); + } } + + video_element.playbackRate = trial.rate; + // if video start time is specified, hide the video and set the starting time + // before showing and playing, so that the video doesn't automatically show the first frame if(trial.start !== null){ - display_element.querySelector('#jspsych-video-keyboard-response-stimulus').currentTime = trial.start; + video_element.pause(); + video_element.currentTime = trial.start; + video_element.onseeked = function() { + video_element.style.visibility = "visible"; + if (trial.autoplay) { + video_element.play(); + } + } } + var stopped = false; if(trial.stop !== null){ - 
display_element.querySelector('#jspsych-video-keyboard-response-stimulus').addEventListener('timeupdate', function(e){ - var currenttime = display_element.querySelector('#jspsych-video-keyboard-response-stimulus').currentTime; + video_element.addEventListener('timeupdate', function(e){ + var currenttime = video_element.currentTime; if(currenttime >= trial.stop){ - display_element.querySelector('#jspsych-video-keyboard-response-stimulus').pause(); + video_element.pause(); + if (trial.trial_ends_after_video && !(stopped)) { + stopped = true; // this is to prevent end_trial from being called twice, because the timeupdate event can fire in quick succession + end_trial(); + } } }) } - display_element.querySelector('#jspsych-video-keyboard-response-stimulus').playbackRate = trial.rate; - // store response var response = { rt: null, @@ -181,12 +224,17 @@ jsPsych.plugins["video-keyboard-response"] = (function() { // kill keyboard listeners jsPsych.pluginAPI.cancelAllKeyboardResponses(); + + // stop the video file if it is playing + // remove end event listeners if they exist + display_element.querySelector('#jspsych-video-keyboard-response-stimulus').pause(); + display_element.querySelector('#jspsych-video-keyboard-response-stimulus').onended = function(){ }; // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "key_press": response.key + rt: response.rt, + stimulus: trial.stimulus, + response: response.key }; // clear the display @@ -194,7 +242,7 @@ jsPsych.plugins["video-keyboard-response"] = (function() { // move on to the next trial jsPsych.finishTrial(trial_data); - }; + } // function to handle responses by the subject var after_response = function(info) { @@ -214,7 +262,7 @@ jsPsych.plugins["video-keyboard-response"] = (function() { }; // start the response listener - if (trial.choices != jsPsych.NO_KEYS) { + if ((trial.choices != jsPsych.NO_KEYS) & (trial.response_allowed_while_playing)) { var 
keyboardListener = jsPsych.pluginAPI.getKeyboardResponse({ callback_function: after_response, valid_responses: trial.choices, diff --git a/plugins/jspsych-video-slider-response.js b/plugins/jspsych-video-slider-response.js index 067251d5e2..7a52cafad7 100644 --- a/plugins/jspsych-video-slider-response.js +++ b/plugins/jspsych-video-slider-response.js @@ -18,7 +18,7 @@ jsPsych.plugins["video-slider-response"] = (function() { name: 'video-slider-response', description: '', parameters: { - sources: { + stimulus: { type: jsPsych.plugins.parameterType.VIDEO, pretty_name: 'Video', default: undefined, @@ -139,14 +139,24 @@ jsPsych.plugins["video-slider-response"] = (function() { pretty_name: 'Response ends trial', default: true, description: 'If true, the trial will end when subject makes a response.' + }, + response_allowed_while_playing: { + type: jsPsych.plugins.parameterType.BOOL, + pretty_name: 'Response allowed while playing', + default: true, + description: 'If true, then responses are allowed while the video is playing. '+ + 'If false, then the video must finish playing before a response is accepted.' } } } plugin.trial = function(display_element, trial) { + // half of the thumb width value from jspsych.css, used to adjust the label positions + var half_thumb_width = 7.5; + // setup stimulus - var video_html = '"; var html = '
'; html += '
' + video_html + '
'; - html += '
'; - html += ''; - html += '
' + html += ''; + var label_width_perc = 100/(trial.labels.length-1); + var percent_of_range = j * (100/(trial.labels.length - 1)); + var percent_dist_from_center = ((percent_of_range-50)/50)*100; + var offset = (percent_dist_from_center * half_thumb_width)/100; + html += '
'; html += ''+trial.labels[j]+''; html += '
' } @@ -202,39 +230,61 @@ jsPsych.plugins["video-slider-response"] = (function() { } // add submit button - html += ''; + var next_disabled_attribute = ""; + if (trial.require_movement | !trial.response_allowed_while_playing) { + next_disabled_attribute = "disabled"; + } + html += ''; display_element.innerHTML = html; + var video_element = display_element.querySelector('#jspsych-video-slider-response-stimulus-video'); + if(video_preload_blob){ - display_element.querySelector('#jspsych-video-slider-response-stimulus').src = video_preload_blob; + video_element.src = video_preload_blob; } - display_element.querySelector('#jspsych-video-slider-response-stimulus').onended = function(){ + video_element.onended = function(){ if(trial.trial_ends_after_video){ end_trial(); + } else if (!trial.response_allowed_while_playing) { + enable_slider(); } } + video_element.playbackRate = trial.rate; + + // if video start time is specified, hide the video and set the starting time + // before showing and playing, so that the video doesn't automatically show the first frame if(trial.start !== null){ - display_element.querySelector('#jspsych-video-slider-response-stimulus').currentTime = trial.start; + video_element.pause(); + video_element.currentTime = trial.start; + video_element.onseeked = function() { + video_element.style.visibility = "visible"; + if (trial.autoplay) { + video_element.play(); + } + } } + var stopped = false; if(trial.stop !== null){ - display_element.querySelector('#jspsych-video-slider-response-stimulus').addEventListener('timeupdate', function(e){ - var currenttime = display_element.querySelector('#jspsych-video-slider-response-stimulus').currentTime; + video_element.addEventListener('timeupdate', function(e){ + var currenttime = video_element.currentTime; if(currenttime >= trial.stop){ - display_element.querySelector('#jspsych-video-slider-response-stimulus').pause(); + video_element.pause(); + if (trial.trial_ends_after_video && !(stopped)) { + stopped = 
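The slider label math introduced here uses `half_thumb_width` (half of the thumb width value from jspsych.css) to shift each label horizontally so that it lines up with where the thumb center can actually sit, since the thumb cannot travel past the track edges. The computation, factored into a function for clarity:

```javascript
// Per-label pixel offset used by the slider-response plugins: end labels
// are adjusted by up to half the thumb width, the center label is not
// moved at all. half_thumb_width is half the thumb size from jspsych.css.
var half_thumb_width = 7.5;

function labelOffset(j, nLabels) {
  var percent_of_range = j * (100 / (nLabels - 1));                    // 0..100
  var percent_dist_from_center = ((percent_of_range - 50) / 50) * 100; // -100..100
  return (percent_dist_from_center * half_thumb_width) / 100;          // px
}
// For 3 labels: -7.5 px, 0 px, +7.5 px
```

The offset is then subtracted inside the label's `left: calc(...)` expression, so the end labels end up aligned with the limited range of thumb positions.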
true; // this is to prevent end_trial from being called twice, because the timeupdate event can fire in quick succession + end_trial(); + } } }) } - display_element.querySelector('#jspsych-video-slider-response-stimulus').playbackRate = trial.rate; - if(trial.require_movement){ - display_element.querySelector('#jspsych-video-slider-response-response').addEventListener('change', function(){ + display_element.querySelector('#jspsych-video-slider-response-response').addEventListener('click', function(){ display_element.querySelector('#jspsych-video-slider-response-next').disabled = false; - }) + }); } var startTime = performance.now(); @@ -249,7 +299,7 @@ jsPsych.plugins["video-slider-response"] = (function() { // measure response time var endTime = performance.now(); response.rt = endTime - startTime; - response.response = display_element.querySelector('#jspsych-video-slider-response-response').value; + response.response = display_element.querySelector('#jspsych-video-slider-response-response').valueAsNumber; if(trial.response_ends_trial){ end_trial(); @@ -265,11 +315,18 @@ jsPsych.plugins["video-slider-response"] = (function() { // kill any remaining setTimeout handlers jsPsych.pluginAPI.clearAllTimeouts(); + // stop the video file if it is playing + // remove any remaining end event handlers + display_element.querySelector('#jspsych-video-slider-response-stimulus-video').pause(); + display_element.querySelector('#jspsych-video-slider-response-stimulus-video').onended = function() {}; + // gather the data to store for the trial var trial_data = { - "rt": response.rt, - "stimulus": trial.stimulus, - "response": response.response + rt: response.rt, + stimulus: trial.stimulus, + start: trial.start, + slider_start: trial.slider_start, + response: response.response }; // clear the display @@ -279,6 +336,14 @@ jsPsych.plugins["video-slider-response"] = (function() { jsPsych.finishTrial(trial_data); }; + // function to enable slider after video ends + function 
enable_slider() { + document.querySelector('#jspsych-video-slider-response-response').disabled = false; + if (!trial.require_movement) { + document.querySelector('#jspsych-video-slider-response-next').disabled = false; + } + } + // end trial if time limit is set if (trial.trial_duration !== null) { jsPsych.pluginAPI.setTimeout(function() { diff --git a/plugins/jspsych-virtual-chinrest.js b/plugins/jspsych-virtual-chinrest.js new file mode 100644 index 0000000000..0875461da6 --- /dev/null +++ b/plugins/jspsych-virtual-chinrest.js @@ -0,0 +1,471 @@ +/* + * virtual chinrest plugin for jsPsych, based on Qisheng Li 11/2019. /// https://github.com/QishengLi/virtual_chinrest + + Modified by Gustavo Juantorena 08/2020 // https://github.com/GEJ1 + + Contributions from Peter J. Kohler: https://github.com/pjkohler + */ + +jsPsych.plugins["virtual-chinrest"] = (function () { + var plugin = {}; + + plugin.info = { + name: "virtual-chinrest", + parameters: { + resize_units: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: "Resize units", + default: "none", + description: + 'What units to resize to? ["none"/"cm"/"inch"/"deg"]. If "none", no resizing will be done to the jsPsych content after this trial.', + }, + pixels_per_unit: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: "Pixels per unit", + default: 100, + description: + "After the scaling factor is applied, this many pixels will equal one unit of measurement.", + }, + // mouse_adjustment: { + // type: jsPsych.plugins.parameterType.BOOL, + // pretty_name: "Adjust Using Mouse?", + // default: true, + // }, + adjustment_prompt: { + type: jsPsych.plugins.parameterType.HTML_STRING, + pretty_name: "Adjustment prompt", + default: ` +
+

Click and drag the lower right corner of the image until it is the same size as a credit card held up to the screen.

+

You can use any card that is the same size as a credit card, like a membership card or driver's license.

+

If you do not have access to a real card you can use a ruler to measure the image width to 3.37 inches or 85.6 mm.

+
`, + description: + "Any content here will be displayed above the card stimulus.", + }, + adjustment_button_prompt: { + type: jsPsych.plugins.parameterType.HTML_STRING, + pretty_name: "Adjustment button prompt", + default: "Click here when the image is the correct size", + description: + " Content of the button displayed below the card stimulus.", + }, + item_path: { + type: jsPsych.plugins.parameterType.STRING, + pretty_name: "Item path", + default: "img/card.png", + description: "Path to an image to be shown in the resizable item div." + }, + item_height_mm: { + type: jsPsych.plugins.parameterType.FLOAT, + pretty_name: "Item height (mm)", + default: 53.98, + description: "The height of the item to be measured, in mm.", + }, + item_width_mm: { + type: jsPsych.plugins.parameterType.FLOAT, + pretty_name: "Item width (mm)", + default: 85.6, + description: "The width of the item to be measured, in mm.", + }, + item_init_size: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: "Initial Size", + default: 250, + description: + "The initial size of the card, in pixels, along the largest dimension.", + }, + blindspot_reps: { + type: jsPsych.plugins.parameterType.INT, + pretty_name: "Blindspot measurement repetitions", + default: 5, + description: + "How many times to measure the blindspot location? If 0, blindspot will not be detected, and viewing distance and degree data not computed.", + }, + blindspot_prompt: { + type: jsPsych.plugins.parameterType.HTML_STRING, + pretty_name: "Blindspot prompt", + default: ` +

Now we will quickly measure how far away you are sitting.

+
+
    +
  1. Put your left hand on the space bar.
+
  2. Cover your right eye with your right hand.
+
  3. Using your left eye, focus on the black square. Keep your focus on the black square.
+
  4. The red ball will disappear as it moves from right to left. Press the space bar as soon as the ball disappears.
+
+
+

Press the space bar when you are ready to begin.

+ `, + description: "HTML-formatted prompt to be shown on the screen during blindspot estimates." + }, + // blindspot_start_prompt: { + // type: jsPsych.plugins.parameterType.HTML_STRING, + // pretty_name: "Blindspot start prompt", + // default: "Start", + // description: "Content of the start button for the blindspot tasks.", + // }, + blindspot_measurements_prompt: { + type: jsPsych.plugins.parameterType.HTML_STRING, + pretty_name: "Blindspot measurements prompt", + default: "Remaining measurements: ", + description: "Text accompanying the remaining measures counter", + }, + viewing_distance_report: { + type: jsPsych.plugins.parameterType.HTML_STRING, + pretty_name: "Viewing distance report", + default: "

+      <p>Based on your responses, you are sitting about <span id="distance-estimate"></span> from the screen.</p>
+      <p>Does that seem about right?</p>

", + description: + 'If "none" is given, viewing distance will not be reported to the participant', + }, + redo_measurement_button_label: { + type: jsPsych.plugins.parameterType.HTML_STRING, + pretty_name: "Re-do measurement button label", + default: 'No, that is not close. Try again.', + description: "Label for the button that can be clicked on the viewing distance report screen to re-do the blindspot estimate(s)." + }, + blindspot_done_prompt: { + type: jsPsych.plugins.parameterType.HTML_STRING, + pretty_name: "Blindspot done prompt", + default: "Yes", + description: "Label for the button that can be clicked on the viewing distance report screen to accept the viewing distance estimate.", + }, + }, + }; + + plugin.trial = function (display_element, trial) { + /* check parameter compatibility */ + if (!(trial.blindspot_reps > 0) && (trial.resize_units == "deg" || trial.resize_units == "degrees")) { + console.error("Blindspot repetitions set to 0, so resizing to degrees of visual angle is not possible!"); + return; + } + + /* some additional parameter configuration */ + let trial_data = { + item_width_mm: trial.item_width_mm, + item_height_mm: trial.item_height_mm, //card dimension: 85.60 × 53.98 mm (3.370 × 2.125 in) + }; + + let blindspot_config_data = { + ball_pos: [], + slider_clck: false, + }; + + let aspect_ratio = trial.item_width_mm / trial.item_height_mm; + + const start_div_height = + aspect_ratio < 1 + ? trial.item_init_size + : Math.round(trial.item_init_size / aspect_ratio); + const start_div_width = + aspect_ratio < 1 + ? Math.round(trial.item_init_size * aspect_ratio) + : trial.item_init_size; + const adjust_size = Math.round(start_div_width * 0.1); + + /* create content for first screen, resizing card */ + let pagesize_content = ` +
+      <div id="item" style="height: ${start_div_height}px; width: ${start_div_width}px; margin: 5px auto; background-image: url(${trial.item_path}); background-size: 100% auto; background-repeat: no-repeat; position: relative;">
+        <div id="jspsych-resize-handle" style="cursor: nwse-resize; width: ${adjust_size}px; height: ${adjust_size}px; border: 3px solid red; border-left: 0; border-top: 0; position: absolute; bottom: 0; right: 0;"></div>
+      </div>
+      ${trial.adjustment_prompt}
+      <button id="end_resize_phase" class="jspsych-btn">${trial.adjustment_button_prompt}</button>
+ ` + + /* create content for second screen, blind spot */ + let blindspot_content = ` +
+      <div id="blind-spot">
+        ${trial.blindspot_prompt}
+        <div id="svgDiv" style="height: 100px; position: relative;"></div>
+        ${trial.blindspot_measurements_prompt}
+        <div id="click" style="display: inline;">${trial.blindspot_reps}</div>
+      </div>
` + + /* create content for final report screen */ + let report_content = ` +
+      <div id="distance-report">
+        ${trial.viewing_distance_report}
+      </div>
+      <button id="redo_blindspot" class="jspsych-btn">${trial.redo_measurement_button_label}</button>
+      <button id="proceed" class="jspsych-btn">${trial.blindspot_done_prompt}</button>
+ ` + + display_element.innerHTML = `
+      <div id="content"></div>
` + + const start_time = performance.now(); + startResizePhase(); + + function startResizePhase() { + display_element.querySelector('#content').innerHTML = pagesize_content; + + // Event listeners for mouse-based resize + let dragging = false; + let origin_x, origin_y; + let cx, cy; + const scale_div = display_element.querySelector("#item"); + + function mouseupevent() { + dragging = false; + }; + document.addEventListener("mouseup", mouseupevent); + + function mousedownevent(e) { + e.preventDefault(); + dragging = true; + origin_x = e.pageX; + origin_y = e.pageY; + cx = parseInt(scale_div.style.width); + cy = parseInt(scale_div.style.height); + }; + display_element.querySelector("#jspsych-resize-handle").addEventListener("mousedown", mousedownevent); + + function resizeevent(e) { + if (dragging) { + let dx = e.pageX - origin_x; + let dy = e.pageY - origin_y; + + if (Math.abs(dx) >= Math.abs(dy)) { + scale_div.style.width = + Math.round(Math.max(20, cx + dx * 2)) + "px"; + scale_div.style.height = + Math.round(Math.max(20, cx + dx * 2) / aspect_ratio) + "px"; + } else { + scale_div.style.height = + Math.round(Math.max(20, cy + dy * 2)) + "px"; + scale_div.style.width = + Math.round(aspect_ratio * Math.max(20, cy + dy * 2)) + "px"; + } + } + } + display_element.addEventListener("mousemove", resizeevent); + + display_element.querySelector("#end_resize_phase").addEventListener("click", finishResizePhase); + + } + + function finishResizePhase() { + // add item width info to data + const item_width_px = getScaledItemWidth(); + trial_data["item_width_px"] = Math.round(item_width_px); + const px2mm = convertPixelsToMM(item_width_px); + trial_data["px2mm"] = accurateRound(px2mm, 2); + // check what to do next + if (trial.blindspot_reps > 0) { + startBlindSpotPhase(); + } else { + endTrial(); + } + } + + function startBlindSpotPhase() { + // reset the config data in case we are redoing the measurement + blindspot_config_data = { + ball_pos: [], + slider_clck: false, + }; + 
// add the content to the page + document.querySelector("#content").innerHTML = blindspot_content; + // draw the ball and fixation square + drawBall(); + // wait for a spacebar to begin the animations + jsPsych.pluginAPI.getKeyboardResponse({ + callback_function: startBall, + valid_responses: [' '], + rt_method: 'performance', + allow_held_keys: false, + persist: false + }) + } + + function startBall() { + ball_position_listener = jsPsych.pluginAPI.getKeyboardResponse({ + callback_function: recordPosition, + valid_responses: [' '], + rt_method: 'performance', + allow_held_keys: false, + persist: true + }); + animateBall(); + } + + function finishBlindSpotPhase() { + ball.stop(); + + jsPsych.pluginAPI.cancelAllKeyboardResponses(); + + if(trial.viewing_distance_report == 'none'){ + endTrial(); + } else { + showReport(); + } + } + + function showReport() { + // Display data + display_element.querySelector("#content").innerHTML = report_content; + display_element.querySelector('#distance-estimate').innerHTML = ` + ${Math.round(trial_data["view_dist_mm"] / 10)} cm (${Math.round(trial_data["view_dist_mm"]*0.0393701)} inches) + ` + + display_element.querySelector("#redo_blindspot").addEventListener('click', startBlindSpotPhase) + display_element.querySelector("#proceed").addEventListener('click', endTrial); + } + + function computeTransformation() { + trial_data.item_width_deg = + (2 * + Math.atan( + trial_data["item_width_mm"] / 2 / trial_data["view_dist_mm"] + ) * + 180) / + Math.PI; + trial_data.px2deg = + trial_data["item_width_px"] / trial_data.item_width_deg; // size of item in pixels divided by size of item in degrees of visual angle + + let px2unit_scr = 0; + switch (trial.resize_units) { + case "cm": + case "centimeters": + px2unit_scr = trial_data["px2mm"] * 10; // pixels per centimeter + break; + case "inch": + case "inches": + px2unit_scr = trial_data["px2mm"] * 25.4; // pixels per inch + break; + case "deg": + case "degrees": + px2unit_scr = 
trial_data["px2deg"]; // pixels per degree of visual angle + break; + } + if (px2unit_scr > 0) { + // scale the window + scale_factor = px2unit_scr / trial.pixels_per_unit; + document.getElementById("jspsych-content").style.transform = + "scale(" + scale_factor + ")"; + // pixels have been scaled, so pixels per degree, pixels per mm and pixels per item_width needs to be updated + trial_data.px2deg = trial_data.px2deg / scale_factor; + trial_data.px2mm = trial_data.px2mm / scale_factor; + trial_data.item_width_px = + trial_data.item_width_px / scale_factor; + trial_data.scale_factor = scale_factor; + } + + if (trial.blindspot_reps > 0) { + trial_data.win_width_deg = window.innerWidth / trial_data.px2deg; + trial_data.win_height_deg = + window.innerHeight / trial_data.px2deg; + } else { + // delete degree related properties + delete trial_data.px2deg; + delete trial_data.item_width_deg; + } + } + + function endTrial() { + + // finish trial + trial_data.rt = performance.now() - start_time; + + // remove lingering event listeners, just in case + jsPsych.pluginAPI.cancelAllKeyboardResponses(); + + // compute final data + computeTransformation(); + + // clear the display + display_element.innerHTML = ""; + + // finish the trial + jsPsych.finishTrial(trial_data); + + } + + function getScaledItemWidth() { + return document.querySelector('#item').getBoundingClientRect().width; + } + + function drawBall(pos = 180) { + // pos: define where the fixation square should be. 
+ var mySVG = SVG("svgDiv"); + const rectX = trial_data["px2mm"] * pos; + const ballX = rectX * 0.6; // define where the ball is + var ball = mySVG.circle(30).move(ballX, 50).fill("#f00"); + window.ball = ball; + var square = mySVG.rect(30, 30).move(Math.min(rectX - 50, 950), 50); //square position + blindspot_config_data["square_pos"] = accurateRound(square.cx(), 2); + blindspot_config_data["rectX"] = rectX; + blindspot_config_data["ballX"] = ballX; + } + + function animateBall() { + ball + .animate(7000) + .during(function (pos) { + moveX = -pos * blindspot_config_data["ballX"]; + window.moveX = moveX; + moveY = 0; + ball.attr({ transform: "translate(" + moveX + "," + moveY + ")" }); //jqueryToVanilla: el.getAttribute(''); + }) + .loop(true, false) + .after(function () { + animateBall(); + }); + } + + function recordPosition() { + // angle: define horizontal blind spot entry point position in degrees. + const angle = 13.5; + + blindspot_config_data["ball_pos"].push(accurateRound(ball.cx() + moveX, 2)); + var sum = blindspot_config_data["ball_pos"].reduce((a, b) => a + b, 0); + var ballPosLen = blindspot_config_data["ball_pos"].length; + blindspot_config_data["avg_ball_pos"] = accurateRound(sum / ballPosLen, 2); + var ball_sqr_distance = + (blindspot_config_data["square_pos"] - blindspot_config_data["avg_ball_pos"]) / + trial_data["px2mm"]; + var viewDistance = ball_sqr_distance / Math.tan(Math.radians(angle)); + trial_data["view_dist_mm"] = accurateRound(viewDistance, 2); + + //counter and stop + var counter = Number(document.querySelector("#click").textContent); + counter = counter - 1; + document.querySelector("#click").textContent = Math.max(counter, 0); + if (counter <= 0) { + finishBlindSpotPhase(); + return; + } else { + ball.stop(); + animateBall(); + } + + } + + function convertPixelsToMM(item_width_px){ + const px2mm = item_width_px / trial_data["item_width_mm"]; + return px2mm; + } + + function accurateRound(value, decimals){ + return 
Number(Math.round(value+'e'+decimals)+'e-'+decimals); + } + + }; + + //helper function for radians + // Converts from degrees to radians. + Math.radians = function (degrees) { + return (degrees * Math.PI) / 180; + }; + + return plugin; +})(); diff --git a/plugins/jspsych-visual-search-circle.js b/plugins/jspsych-visual-search-circle.js index bb24aae9ca..85cf5d18b9 100644 --- a/plugins/jspsych-visual-search-circle.js +++ b/plugins/jspsych-visual-search-circle.js @@ -75,13 +75,13 @@ jsPsych.plugins["visual-search-circle"] = (function() { description: 'The diameter of the search array circle in pixels.' }, target_present_key: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Target present key', default: 'j', description: 'The key to press if the target is present in the search array.' }, target_absent_key: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Target absent key', default: 'f', description: 'The key to press if the target is not present in the search array.' 
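The virtual-chinrest math above is compact: the card resize yields a pixels-per-millimeter factor, and the viewing distance follows from the fixation-square-to-ball separation divided by tan(13.5°). A standalone sketch of that arithmetic — the helper names here are illustrative, not part of the plugin:

```javascript
// Sketch of the virtual-chinrest distance math; helper names are illustrative.

const degToRad = (degrees) => (degrees * Math.PI) / 180;

// Pixels per millimeter, from resizing an on-screen image of a credit card
// (standard card width: 85.6 mm) until it matches a physical card.
function pixelsPerMM(itemWidthPx, itemWidthMM = 85.6) {
  return itemWidthPx / itemWidthMM;
}

// Viewing distance: the blind spot lies roughly 13.5 degrees from fixation,
// so distance = (fixation-to-blindspot separation in mm) / tan(13.5 deg).
function viewingDistanceMM(squarePosPx, avgBallPosPx, px2mm, angleDeg = 13.5) {
  const separationMM = (squarePosPx - avgBallPosPx) / px2mm;
  return separationMM / Math.tan(degToRad(angleDeg));
}

// Example: a 428 px-wide card image gives 5 px/mm; a 300 px separation is
// then 60 mm, so the estimated distance is 60 / tan(13.5 deg), about 250 mm.
const px2mm = pixelsPerMM(428);
const distMM = viewingDistanceMM(1000, 700, px2mm);
```

With the distance in hand, the plugin converts the measured item width to degrees of visual angle via 2·atan(width / 2 / distance), which is how `computeTransformation` derives `px2deg`.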
@@ -179,8 +179,8 @@ jsPsych.plugins["visual-search-circle"] = (function() { var correct = false; - if (jsPsych.pluginAPI.compareKeys(info.key,trial.target_present_key) && trial.target_present || - jsPsych.pluginAPI.compareKeys(info.key,trial.target_absent_key) && !trial.target_present) { + if ((jsPsych.pluginAPI.compareKeys(info.key, trial.target_present_key)) && trial.target_present || + (jsPsych.pluginAPI.compareKeys(info.key, trial.target_absent_key)) && !trial.target_present) { correct = true; } @@ -234,8 +234,8 @@ jsPsych.plugins["visual-search-circle"] = (function() { var trial_data = { correct: correct, rt: rt, - key_press: key_press, - locations: JSON.stringify(display_locs), + response: key_press, + locations: display_locs, target_present: trial.target_present, set_size: trial.set_size }; diff --git a/plugins/jspsych-vsl-animate-occlusion.js b/plugins/jspsych-vsl-animate-occlusion.js index c539bffef9..e6b1794a34 100644 --- a/plugins/jspsych-vsl-animate-occlusion.js +++ b/plugins/jspsych-vsl-animate-occlusion.js @@ -29,7 +29,7 @@ jsPsych.plugins['vsl-animate-occlusion'] = (function() { description: 'A stimulus is a path to an image file.' 
}, choices: { - type: jsPsych.plugins.parameterType.KEYCODE, + type: jsPsych.plugins.parameterType.KEY, pretty_name: 'Choices', array: true, default: jsPsych.ALL_KEYS, @@ -184,8 +184,8 @@ jsPsych.plugins['vsl-animate-occlusion'] = (function() { jsPsych.pluginAPI.cancelKeyboardResponse(key_listener); var trial_data = { - "stimuli": JSON.stringify(trial.stimuli), - "responses": JSON.stringify(responses) + stimuli: trial.stimuli, + response: responses }; jsPsych.finishTrial(trial_data); diff --git a/plugins/jspsych-vsl-grid-scene.js b/plugins/jspsych-vsl-grid-scene.js index fe5b8210b8..6e6422b2a8 100644 --- a/plugins/jspsych-vsl-grid-scene.js +++ b/plugins/jspsych-vsl-grid-scene.js @@ -57,7 +57,7 @@ jsPsych.plugins['vsl-grid-scene'] = (function() { display_element.innerHTML = ''; var trial_data = { - "stimulus": JSON.stringify(trial.stimuli) + stimulus: trial.stimuli }; jsPsych.finishTrial(trial_data); diff --git a/plugins/jspsych-webgazer-calibrate.js b/plugins/jspsych-webgazer-calibrate.js new file mode 100644 index 0000000000..34050553cb --- /dev/null +++ b/plugins/jspsych-webgazer-calibrate.js @@ -0,0 +1,161 @@ +/** + * jspsych-webgazer-calibrate + * Josh de Leeuw + **/ + +jsPsych.plugins["webgazer-calibrate"] = (function() { + + var plugin = {}; + + plugin.info = { + name: 'webgazer-calibrate', + description: '', + parameters: { + calibration_points: { + type: jsPsych.plugins.parameterType.INT, + default: [[10,10], [10,50], [10,90], [50,10], [50,50], [50,90], [90,10], [90,50], [90,90]] + }, + calibration_mode: { + type: jsPsych.plugins.parameterType.STRING, + default: 'click', // options: 'click', 'view' + }, + point_size:{ + type: jsPsych.plugins.parameterType.INT, + default: 20 + }, + repetitions_per_point: { + type: jsPsych.plugins.parameterType.INT, + default: 1 + }, + randomize_calibration_order: { + type: jsPsych.plugins.parameterType.BOOL, + default: false + }, + time_to_saccade: { + type: jsPsych.plugins.parameterType.INT, + default: 1000 + }, + 
time_per_point: { + type: jsPsych.plugins.parameterType.INT, + default: 1000 + } + } + } + + plugin.trial = function(display_element, trial) { + + var html = ` +
+      <div id="webgazer-calibrate-container" style="position: relative; width: 100vw; height: 100vh;"></div>
` + + display_element.innerHTML = html; + + var wg_container = display_element.querySelector('#webgazer-calibrate-container'); + + var reps_completed = 0; + var points_completed = -1; + var cal_points = null; + + calibrate(); + + function calibrate(){ + jsPsych.extensions['webgazer'].resume(); + if(trial.calibration_mode == 'click'){ + jsPsych.extensions['webgazer'].startMouseCalibration(); + } + next_calibration_round(); + } + + function next_calibration_round(){ + if(trial.randomize_calibration_order){ + cal_points = jsPsych.randomization.shuffle(trial.calibration_points); + } else { + cal_points = trial.calibration_points; + } + points_completed = -1; + next_calibration_point(); + } + + function next_calibration_point(){ + points_completed++; + if(points_completed == cal_points.length){ + reps_completed++; + if(reps_completed == trial.repetitions_per_point){ + calibration_done(); + } else { + next_calibration_round(); + } + } else { + var pt = cal_points[points_completed]; + calibration_display_gaze_only(pt); + } + } + + function calibration_display_gaze_only(pt){ + var pt_html = `
+      <div id="calibration-point" style="width: ${trial.point_size}px; height: ${trial.point_size}px; border-radius: ${trial.point_size}px; background-color: #333; position: absolute; left: ${pt[0]}%; top: ${pt[1]}%;"></div>
` + wg_container.innerHTML = pt_html; + + var pt_dom = wg_container.querySelector('#calibration-point'); + + if(trial.calibration_mode == 'click'){ + pt_dom.style.cursor = 'pointer'; + pt_dom.addEventListener('click', function(){ + next_calibration_point(); + }) + } + + if(trial.calibration_mode == 'view'){ + var br = pt_dom.getBoundingClientRect(); + var x = br.left + br.width / 2; + var y = br.top + br.height / 2; + + var pt_start_cal = performance.now() + trial.time_to_saccade; + var pt_finish = performance.now() + trial.time_to_saccade + trial.time_per_point; + + requestAnimationFrame(function watch_dot(){ + + if(performance.now() > pt_start_cal){ + jsPsych.extensions['webgazer'].calibratePoint(x,y,'click'); + } + if(performance.now() < pt_finish){ + requestAnimationFrame(watch_dot); + } else { + next_calibration_point(); + } + }) + } + } + + function calibration_done(){ + if(trial.calibration_mode == 'click'){ + jsPsych.extensions['webgazer'].stopMouseCalibration(); + } + wg_container.innerHTML = ""; + end_trial(); + } + + // function to end trial when it is time + function end_trial() { + jsPsych.extensions['webgazer'].pause(); + jsPsych.extensions['webgazer'].hidePredictions(); + jsPsych.extensions['webgazer'].hideVideo(); + + // kill any remaining setTimeout handlers + jsPsych.pluginAPI.clearAllTimeouts(); + + // gather the data to store for the trial + var trial_data = { + + }; + + // clear the display + display_element.innerHTML = ''; + + // move on to the next trial + jsPsych.finishTrial(trial_data); + }; + + }; + + return plugin; + })(); \ No newline at end of file diff --git a/plugins/jspsych-webgazer-init-camera.js b/plugins/jspsych-webgazer-init-camera.js new file mode 100644 index 0000000000..ac510580d6 --- /dev/null +++ b/plugins/jspsych-webgazer-init-camera.js @@ -0,0 +1,139 @@ +/** + * jspsych-webgazer-init-camera + * Josh de Leeuw + **/ + + jsPsych.plugins["webgazer-init-camera"] = (function () { + + var plugin = {}; + + plugin.info = { + name: 
'webgazer-init-camera', + description: '', + parameters: { + instructions: { + type: jsPsych.plugins.parameterType.HTML_STRING, + default: ` +

+          <p>Position your head so that the webcam has a good view of your eyes.</p>
+          <p>Center your face in the box and look directly towards the camera.</p>
+          <p>It is important that you try to keep your head reasonably still throughout the experiment, so please take a moment to adjust your setup to be comfortable.</p>
+          <p>When your face is centered in the box and the box is green, you can click to continue.</p>

` + }, + button_text: { + type: jsPsych.plugins.parameterType.STRING, + default: 'Continue' + } + } + } + + plugin.trial = function (display_element, trial) { + + var start_time = performance.now(); + var load_time; + + if (!jsPsych.extensions.webgazer.isInitialized()) { + jsPsych.extensions.webgazer.start().then(function () { + showTrial(); + }).catch(function () { + display_element.innerHTML = `

+          <p>The experiment cannot continue because the eye tracker failed to start.</p>
+          <p>This may be because of a technical problem or because you did not grant permission for the page to use your camera.</p>

` + }); + } else { + showTrial(); + } + + function showTrial() { + + load_time = Math.round(performance.now() - start_time); + + var style = ` + + ` + document.querySelector('head').insertAdjacentHTML('beforeend', style); + + var html = ` +
+        <div id="webgazer-init-container" style="position: relative; width: 100vw; height: 100vh;"></div>
` + + display_element.innerHTML = html; + + jsPsych.extensions['webgazer'].showVideo(); + jsPsych.extensions['webgazer'].resume(); + + var wg_container = display_element.querySelector('#webgazer-init-container'); + + + wg_container.innerHTML = ` +
+          ${trial.instructions}
+          <button id="jspsych-wg-cont" class="jspsych-btn" disabled>${trial.button_text}</button>
` + + if(is_face_detect_green()){ + document.querySelector('#jspsych-wg-cont').disabled = false; + } else { + var observer = new MutationObserver(face_detect_event_observer); + observer.observe(document, { + attributes: true, + attributeFilter: ['style'], + subtree: true + }); + } + + document.querySelector('#jspsych-wg-cont').addEventListener('click', function () { + if(observer){ + observer.disconnect(); + } + end_trial(); + }); + } + + function is_face_detect_green(){ + if(document.querySelector("#webgazerFaceFeedbackBox")){ + return document.querySelector('#webgazerFaceFeedbackBox').style.borderColor == "green" + } else { + return false; + } + } + + function face_detect_event_observer(mutationsList, observer) { + if (mutationsList[0].target == document.querySelector('#webgazerFaceFeedbackBox')) { + if (mutationsList[0].type == 'attributes' && mutationsList[0].target.style.borderColor == "green") { + document.querySelector('#jspsych-wg-cont').disabled = false; + } + if (mutationsList[0].type == 'attributes' && mutationsList[0].target.style.borderColor == "red") { + document.querySelector('#jspsych-wg-cont').disabled = true; + } + } + } + + // function to end trial when it is time + function end_trial() { + + jsPsych.extensions['webgazer'].pause(); + jsPsych.extensions['webgazer'].hideVideo(); + + + // kill any remaining setTimeout handlers + jsPsych.pluginAPI.clearAllTimeouts(); + + // gather the data to store for the trial + var trial_data = { + load_time: load_time + }; + + // clear the display + display_element.innerHTML = ''; + + document.querySelector('#webgazer-center-style').remove(); + + // move on to the next trial + jsPsych.finishTrial(trial_data); + }; + + }; + + return plugin; +})(); \ No newline at end of file diff --git a/plugins/jspsych-webgazer-validate.js b/plugins/jspsych-webgazer-validate.js new file mode 100644 index 0000000000..e25224a2bd --- /dev/null +++ b/plugins/jspsych-webgazer-validate.js @@ -0,0 +1,316 @@ +/** + * 
jspsych-webgazer-validate + * Josh de Leeuw + **/ + + jsPsych.plugins["webgazer-validate"] = (function() { + + var plugin = {}; + + plugin.info = { + name: 'webgazer-validate', + description: '', + parameters: { + validation_points: { + type: jsPsych.plugins.parameterType.INT, + default: [[10,10], [10,50], [10,90], [50,10], [50,50], [50,90], [90,10], [90,50], [90,90]] + }, + validation_point_coordinates: { + type: jsPsych.plugins.parameterType.STRING, + default: 'percent' // options: 'percent', 'center-offset-pixels' + }, + roi_radius: { + type: jsPsych.plugins.parameterType.INT, + default: 200 + }, + randomize_validation_order: { + type: jsPsych.plugins.parameterType.BOOL, + default: false + }, + time_to_saccade: { + type: jsPsych.plugins.parameterType.INT, + default: 1000 + }, + validation_duration: { + type: jsPsych.plugins.parameterType.INT, + default: 2000 + }, + point_size:{ + type: jsPsych.plugins.parameterType.INT, + default: 20 + }, + show_validation_data: { + type: jsPsych.plugins.parameterType.BOOL, + default: false + } + } + } + + plugin.trial = function(display_element, trial) { + + var trial_data = {} + trial_data.raw_gaze = []; + trial_data.percent_in_roi = []; + trial_data.average_offset = []; + trial_data.validation_points = null; + + var html = ` +
+        <div id="webgazer-validate-container" style="position: relative; width: 100vw; height: 100vh;"></div>
` + + display_element.innerHTML = html; + + var wg_container = display_element.querySelector('#webgazer-validate-container'); + + var points_completed = -1; + var val_points = null; + var start = performance.now(); + + validate(); + + function validate(){ + + if(trial.randomize_validation_order){ + val_points = jsPsych.randomization.shuffle(trial.validation_points); + } else { + val_points = trial.validation_points; + } + trial_data.validation_points = val_points; + points_completed = -1; + //jsPsych.extensions['webgazer'].resume(); + jsPsych.extensions.webgazer.startSampleInterval(); + //jsPsych.extensions.webgazer.showPredictions(); + next_validation_point(); + } + + function next_validation_point(){ + points_completed++; + if(points_completed == val_points.length){ + validation_done(); + } else { + var pt = val_points[points_completed]; + validation_display(pt); + } + } + + function validation_display(pt){ + var pt_html = drawValidationPoint(pt[0], pt[1]); + wg_container.innerHTML = pt_html; + + var pt_dom = wg_container.querySelector('.validation-point'); + + var br = pt_dom.getBoundingClientRect(); + var x = br.left + br.width / 2; + var y = br.top + br.height / 2; + + var pt_start_val = performance.now() + trial.time_to_saccade; + var pt_finish = pt_start_val + trial.validation_duration; + + var pt_data = []; + + var cancelGazeUpdate = jsPsych.extensions['webgazer'].onGazeUpdate(function(prediction){ + if(performance.now() > pt_start_val){ + pt_data.push({x: prediction.x, y: prediction.y, dx: prediction.x - x, dy: prediction.y - y, t: Math.round(prediction.t-start)}); + } + }); + + requestAnimationFrame(function watch_dot(){ + if(performance.now() < pt_finish){ + requestAnimationFrame(watch_dot); + } else { + trial_data.raw_gaze.push(pt_data); + cancelGazeUpdate(); + + next_validation_point(); + } + }); + + } + + function drawValidationPoint(x,y){ + if(trial.validation_point_coordinates == 'percent'){ + return drawValidationPoint_PercentMode(x,y); + } + 
if(trial.validation_point_coordinates == 'center-offset-pixels'){ + return drawValidationPoint_CenterOffsetMode(x,y); + } + } + + function drawValidationPoint_PercentMode(x,y){ + return `
+      <div class="validation-point" style="width: ${trial.point_size}px; height: ${trial.point_size}px; border-radius: ${trial.point_size}px; background-color: #333; position: absolute; left: ${x}%; top: ${y}%;"></div>
` + } + + function drawValidationPoint_CenterOffsetMode(x,y){ + return `
+      <div class="validation-point" style="width: ${trial.point_size}px; height: ${trial.point_size}px; border-radius: ${trial.point_size}px; background-color: #333; position: absolute; left: calc(50% + ${x}px); top: calc(50% + ${y}px);"></div>
` + } + + function drawCircle(target_x, target_y, dx, dy, r){ + if(trial.validation_point_coordinates == 'percent'){ + return drawCircle_PercentMode(target_x, target_y, dx, dy, r); + } + if(trial.validation_point_coordinates == 'center-offset-pixels'){ + return drawCircle_CenterOffsetMode(target_x, target_y, dx, dy, r); + } + } + + function drawCircle_PercentMode(target_x, target_y, dx, dy, r){ + var html = ` +
+ ` + return html; + } + + function drawCircle_CenterOffsetMode(target_x, target_y, dx, dy, r){ + var html = ` +
+ ` + return html; + } + + function drawRawDataPoint(target_x, target_y, dx, dy, ){ + if(trial.validation_point_coordinates == 'percent'){ + return drawRawDataPoint_PercentMode(target_x, target_y, dx, dy); + } + if(trial.validation_point_coordinates == 'center-offset-pixels'){ + return drawRawDataPoint_CenterOffsetMode(target_x, target_y, dx, dy); + } + } + + function drawRawDataPoint_PercentMode(target_x, target_y, dx, dy){ + var color = Math.sqrt(dx*dx + dy*dy) <= trial.roi_radius ? '#afa' : '#faa'; + return `
` + } + + function drawRawDataPoint_CenterOffsetMode(target_x, target_y, dx, dy){ + var color = Math.sqrt(dx*dx + dy*dy) <= trial.roi_radius ? '#afa' : '#faa'; + return `
` + } + + function median(arr){ + var mid = Math.floor(arr.length/2); + var sorted_arr = arr.sort((a,b) => a-b); + if(arr.length % 2 == 0){ + return (sorted_arr[mid-1] + sorted_arr[mid]) / 2; + } else { + return sorted_arr[mid]; + } + } + + function calculateGazeCentroid(gazeData){ + + var x_diff_m = gazeData.reduce(function(accumulator, currentValue, index){ + accumulator += currentValue.dx; + if(index == gazeData.length-1){ + return accumulator / gazeData.length; + } else { + return accumulator; + } + }, 0); + + var y_diff_m = gazeData.reduce(function(accumulator, currentValue, index){ + accumulator += currentValue.dy; + if(index == gazeData.length-1){ + return accumulator / gazeData.length; + } else { + return accumulator; + } + }, 0); + + var median_distance = median(gazeData.map(function(x){ return(Math.sqrt(Math.pow(x.dx-x_diff_m,2) + Math.pow(x.dy-y_diff_m,2)))})); + + return { + x: x_diff_m, + y: y_diff_m, + r: median_distance + } + } + + function calculatePercentInROI(gazeData){ + var distances = gazeData.map(function(p){ + return(Math.sqrt(Math.pow(p.dx,2) + Math.pow(p.dy,2))) + }); + var sum_in_roi = distances.reduce(function(accumulator, currentValue){ + if(currentValue <= trial.roi_radius){ + accumulator++; + } + return accumulator; + }, 0); + var percent = sum_in_roi / gazeData.length * 100; + return percent; + } + + function calculateSampleRate(gazeData){ + var mean_diff = []; + if(gazeData.length == 0){ + return 0; + } + for(var i=0; i<gazeData.length; i++){ + if(gazeData[i].length > 1){ + var t_diff = []; + for(var j=1; j<gazeData[i].length; j++){ + t_diff.push(gazeData[i][j].t - gazeData[i][j-1].t) + } + mean_diff.push(t_diff.reduce(function(a,b) { return(a+b) }, 0) / t_diff.length); + } + } + if(mean_diff.length > 0){ + return 1000 / (mean_diff.reduce(function(a,b) { return(a+b) }, 0) / mean_diff.length); + } else { + return null; + } + } + + function validation_done(){ + trial_data.samples_per_sec = calculateSampleRate(trial_data.raw_gaze).toFixed(2); + for(var i=0; iClimbing
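The webgazer-validate plugin summarizes each validation point by the fraction of gaze samples inside a region of interest and the effective sampling rate. A simplified, self-contained version of those two calculations — here the ROI radius is passed in rather than read from `trial.roi_radius`:

```javascript
// Simplified versions of the webgazer-validate summary metrics.
// Gaze samples are {dx, dy, t}: offset from the target (px) and timestamp (ms).

// Percent of samples whose Euclidean offset from the target is within the ROI.
function percentInROI(samples, roiRadius) {
  const inROI = samples.filter((p) => Math.hypot(p.dx, p.dy) <= roiRadius).length;
  return (inROI / samples.length) * 100;
}

// Effective sample rate (Hz) from the mean inter-sample interval per point.
function sampleRate(pointSamples) {
  const meanDiffs = [];
  for (const samples of pointSamples) {
    if (samples.length > 1) {
      let total = 0;
      for (let j = 1; j < samples.length; j++) {
        total += samples[j].t - samples[j - 1].t;
      }
      meanDiffs.push(total / (samples.length - 1));
    }
  }
  if (meanDiffs.length === 0) return null;
  const meanInterval = meanDiffs.reduce((a, b) => a + b, 0) / meanDiffs.length;
  return 1000 / meanInterval;
}

// Example: 3 of 4 samples within a 200 px ROI gives 75%;
// samples spaced 33 ms apart give roughly a 30 Hz effective rate.
const pct = percentInROI(
  [{ dx: 10, dy: 10 }, { dx: 150, dy: 0 }, { dx: 0, dy: 250 }, { dx: 50, dy: 50 }],
  200
);
const hz = sampleRate([[{ t: 0 }, { t: 33 }, { t: 66 }]]);
```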

</p>', '<p>Walking</p>

'], + answer: 'different', + gap_duration: 0, + first_stim_duration: null + } + + var timeline = [trial]; + + jsPsych.init({timeline: timeline}); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('

<p>Climbing</p>

'); + utils.pressKey('q'); + jest.runAllTimers(); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('

<p>Walking</p>

'); + utils.pressKey('q'); + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + + var csv_data = jsPsych.data.get().ignore(['rt','internal_node_id','time_elapsed','trial_type','rt_stim1','response_stim1']).csv(); + expect(csv_data).toBe('"answer","correct","stimulus","response","trial_index"\r\n"different","false","[""

<p>Climbing</p>"",""<p>Walking</p>

""]","q","0"\r\n') + }) + + test('survey-multi-select response array is correctly converted', function(){ + require(root + 'plugins/jspsych-survey-multi-select.js'); + + var trial = { + type: 'survey-multi-select', + questions: [ + {prompt: "foo", options: ["fuzz", "bizz", "bar"], name: 'q'} + ] + }; + + var timeline = [trial]; + + jsPsych.init({timeline: timeline}); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + utils.clickTarget(document.querySelector('#jspsych-survey-multi-select-response-0-0')); + utils.clickTarget(document.querySelector('#jspsych-survey-multi-select-response-0-1')); + utils.clickTarget(document.querySelector('#jspsych-survey-multi-select-next')); + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + + var csv_data = jsPsych.data.get().ignore(['rt','internal_node_id','time_elapsed','trial_type','question_order']).csv(); + expect(csv_data).toBe('"response","trial_index"\r\n"{""q"":[""fuzz"",""bizz""]}","0"\r\n') + }) + +}); \ No newline at end of file diff --git a/tests/jsPsych.data/data-json-conversion.test.js b/tests/jsPsych.data/data-json-conversion.test.js new file mode 100644 index 0000000000..845862c0b8 --- /dev/null +++ b/tests/jsPsych.data/data-json-conversion.test.js @@ -0,0 +1,120 @@ +const root = '../../'; +const utils = require('../testing-utils.js'); + +jest.useFakeTimers(); + +describe('data conversion to json', function(){ + + beforeEach(function(){ + require(root + 'jspsych.js'); + }); + + test('survey-text data response object is correctly converted', function(){ + require(root + 'plugins/jspsych-survey-text.js'); + + var trial = { + type: 'survey-text', + questions: [ + {prompt: 'Q1'}, + {prompt: 'Q2'} + ] + } + + var timeline = [trial]; + + jsPsych.init({timeline}); + + document.querySelector('#input-0').value = 'Response 1'; + document.querySelector('#input-1').value = 'Response 2'; + + utils.clickTarget(document.querySelector('#jspsych-survey-text-next')); + + var json_data = 
jsPsych.data.get().ignore(['rt','internal_node_id', 'time_elapsed', 'trial_type']).json(); + expect(json_data).toBe(JSON.stringify([{response: {Q0: "Response 1", Q1: "Response 2"}, trial_index: 0}])); + }) + + test('same-different-html stimulus array is correctly converted', function(){ + require(root + 'plugins/jspsych-same-different-html.js'); + + var trial = { + type: 'same-different-html', + stimuli: ['

<p>Climbing</p>', '<p>Walking</p>

'], + answer: 'different', + gap_duration: 0, + first_stim_duration: null + } + + var timeline = [trial]; + + jsPsych.init({timeline: timeline}); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('

<p>Climbing</p>

'); + utils.pressKey('q'); + jest.runAllTimers(); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('

<p>Walking</p>

'); + utils.pressKey('q'); + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + + var json_data = jsPsych.data.get().ignore(['rt','internal_node_id','time_elapsed','trial_type','rt_stim1','response_stim1']).json(); + expect(json_data).toBe(JSON.stringify([{answer: 'different', correct: false, stimulus: ['
<p>Climbing</p>
','
<p>Walking</p>
'], response: 'q', trial_index: 0}])); + }) + + test('survey-multi-select response array is correctly converted', function(){ + require(root + 'plugins/jspsych-survey-multi-select.js'); + + var trial = { + type: 'survey-multi-select', + questions: [ + {prompt: "foo", options: ["fuzz", "bizz", "bar"], name: 'q'} + ] + }; + + var timeline = [trial]; + + jsPsych.init({timeline: timeline}); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + utils.clickTarget(document.querySelector('#jspsych-survey-multi-select-response-0-0')); + utils.clickTarget(document.querySelector('#jspsych-survey-multi-select-response-0-1')); + utils.clickTarget(document.querySelector('#jspsych-survey-multi-select-next')); + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + + var json_data = jsPsych.data.get().ignore(['rt','internal_node_id','time_elapsed','trial_type','question_order']).json(); + var data_js = [ + { + response: { + q: ["fuzz","bizz"], + }, + trial_index: 0 + } + ]; + expect(json_data).toBe(JSON.stringify(data_js)); + }) + + test('instructions view_history is correctly converted - issue #670', function(){ + require(root + 'plugins/jspsych-instructions.js'); + + var trial = { + type: 'instructions', + pages: ['page 1','page 2'], + key_forward: 'a', + allow_keys: true + }; + + jsPsych.init({timeline: [trial]}); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('page 1'); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('page 2'); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + + var json_data = jsPsych.data.get().ignore(['rt','internal_node_id','time_elapsed']).json(); + var js_data = JSON.parse(json_data); + expect(Array.isArray(js_data[0].view_history)).toBe(true); + expect(js_data[0].view_history.length).toBe(2); + expect(js_data[0].view_history[0].page_index).toBe(0); + expect(js_data[0].view_history[1].page_index).toBe(1); + }) + +}); \ No newline at end of file diff --git 
a/tests/jsPsych.data/datacollection.test.js b/tests/jsPsych.data/datacollection.test.js index cdfdb631a8..ef5155f9a1 100644 --- a/tests/jsPsych.data/datacollection.test.js +++ b/tests/jsPsych.data/datacollection.test.js @@ -63,14 +63,37 @@ describe('DataCollection', function(){ }); test('#values', function(){ expect(JSON.stringify(jsPsych.data.get().values())).toBe(JSON.stringify(data)); + expect(jsPsych.data.get().values()).toBe(data); }); test('#first', function(){ expect(jsPsych.data.get().first(3).count()).toBe(3); expect(jsPsych.data.get().first(2).values()[1].rt).toBe(200); + expect(jsPsych.data.get().first().count()).toBe(1); + expect(() => { + jsPsych.data.get().first(-1) + }).toThrow(); + expect(() => { + jsPsych.data.get().first(0) + }).toThrow(); + expect(jsPsych.data.get().filter({foo: "bar"}).first(1).count()).toBe(0); + var n = jsPsych.data.get().count(); + var too_many = n+1; + expect(jsPsych.data.get().first(too_many).count()).toBe(n); }); test('#last', function(){ expect(jsPsych.data.get().last(2).count(2)).toBe(2); expect(jsPsych.data.get().last(2).values()[0].rt).toBe(400); + expect(jsPsych.data.get().last().count()).toBe(1); + expect(() => { + jsPsych.data.get().last(-1) + }).toThrow(); + expect(() => { + jsPsych.data.get().last(0) + }).toThrow(); + expect(jsPsych.data.get().filter({foo: "bar"}).last(1).count()).toBe(0); + var n = jsPsych.data.get().count(); + var too_many = n+1; + expect(jsPsych.data.get().last(too_many).count()).toBe(n); }); test('#join', function(){ var dc1 = jsPsych.data.get().filter({filter: true}); diff --git a/tests/jsPsych.data/datamodule.test.js b/tests/jsPsych.data/datamodule.test.js index bee1857687..c669a49845 100644 --- a/tests/jsPsych.data/datamodule.test.js +++ b/tests/jsPsych.data/datamodule.test.js @@ -11,7 +11,7 @@ describe('Basic data recording', function(){ ]; jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // check if data contains rt 
expect(jsPsych.data.get().select('rt').count()).toBe(1); }) @@ -26,7 +26,7 @@ describe('#addProperties', function(){ jsPsych.data.addProperties({'testprop': 1}); jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // check if data contains testprop expect(jsPsych.data.get().select('testprop').count()).toBe(1); }); @@ -38,7 +38,7 @@ describe('#addProperties', function(){ ]; jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // check if data contains testprop expect(jsPsych.data.get().select('testprop').count()).toBe(0); jsPsych.data.addProperties({'testprop': 1}); @@ -61,7 +61,7 @@ describe('#addDataToLastTrial', function(){ ]; jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // check data structure expect(jsPsych.data.get().select('testA').values[0]).toBe(1); expect(jsPsych.data.get().select('testB').values[0]).toBe(2); @@ -77,9 +77,9 @@ describe('#getLastTrialData', function(){ ]; jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // click through second trial - utils.pressKey(32); + utils.pressKey('a'); // check data structure expect(jsPsych.data.getLastTrialData().select('trial_index').values[0]).toBe(1); }); @@ -107,7 +107,7 @@ describe('#getLastTimelineData', function(){ jsPsych.init({timeline:timeline}); // click through all four trials for(var i=0; i<4; i++){ - utils.pressKey(32); + utils.pressKey('a'); } // check data structure expect(jsPsych.data.getLastTimelineData().count()).toBe(2); @@ -123,7 +123,7 @@ describe('#displayData', function(){ ]; jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // overwrite data with custom data var data = [{col1: 1, col2: 2}, {col1: 3, col2: 4}] jsPsych.data._customInsert(data); @@ -139,7 +139,7 @@ describe('#displayData', function(){ 
]; jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // overwrite data with custom data var data = [{col1: 1, col2: 2}, {col1: 3, col2: 4}] jsPsych.data._customInsert(data); diff --git a/tests/jsPsych.data/dataparameter.test.js b/tests/jsPsych.data/dataparameter.test.js index 497f547024..8c0a1d0975 100644 --- a/tests/jsPsych.data/dataparameter.test.js +++ b/tests/jsPsych.data/dataparameter.test.js @@ -25,7 +25,7 @@ describe('The data parameter', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(data) { expect(data).toBe(true) }); @@ -57,9 +57,9 @@ describe('The data parameter', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(data) { expect(data).toBe(2) }); @@ -96,9 +96,9 @@ describe('The data parameter', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(data) { expect(data).toBe(2) }); @@ -122,8 +122,8 @@ describe('The data parameter', function(){ timeline: [trial] }); - utils.pressKey(32); // trial 1 - utils.pressKey(32); // trial 2 + utils.pressKey('a'); // trial 1 + utils.pressKey('a'); // trial 2 expect(jsPsych.data.get().filter({added: true}).count()).toBe(1); expect(jsPsych.data.get().filter({added: false}).count()).toBe(1); @@ -147,8 +147,8 @@ describe('The data parameter', function(){ timeline: [trial] }); - utils.pressKey(32); // trial 1 - utils.pressKey(32); // trial 2 + utils.pressKey('a'); // trial 1 + utils.pressKey('a'); // trial 2 expect(jsPsych.data.get().filter({added: true}).count()).toBe(1); expect(jsPsych.data.get().filter({added: false}).count()).toBe(1); @@ -178,8 +178,8 @@ describe('The data parameter', function(){ timeline: [trial] }); - utils.pressKey(32); // trial 1 - utils.pressKey(32); // trial 2 + utils.pressKey('a'); // trial 1 + utils.pressKey('a'); // 
trial 2 expect(jsPsych.data.get().filter({added_copy: true}).count()).toBe(1); expect(jsPsych.data.get().filter({added_copy: false}).count()).toBe(1); @@ -219,9 +219,9 @@ describe('The data parameter', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(data) { expect(data).toBe(2) }); @@ -246,6 +246,6 @@ describe('The data parameter', function(){ } }) - utils.pressKey(32); + utils.pressKey('a'); }) }); diff --git a/tests/jsPsych.data/interactions.test.js b/tests/jsPsych.data/interactions.test.js index a06eae8574..c96878d4b4 100644 --- a/tests/jsPsych.data/interactions.test.js +++ b/tests/jsPsych.data/interactions.test.js @@ -15,7 +15,7 @@ describe('Data recording', function(){ jsPsych.init({timeline:timeline}); window.dispatchEvent(new Event('focus')); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // check data expect(jsPsych.data.getInteractionData().filter({event: 'focus'}).count()).toBe(1); }) @@ -27,7 +27,7 @@ describe('Data recording', function(){ jsPsych.init({timeline:timeline}); window.dispatchEvent(new Event('blur')); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // check data expect(jsPsych.data.getInteractionData().filter({event: 'blur'}).count()).toBe(1); }) @@ -40,7 +40,7 @@ describe('Data recording', function(){ ]; jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // check if data contains rt }); @@ -50,7 +50,7 @@ describe('Data recording', function(){ ]; jsPsych.init({timeline:timeline}); // click through first trial - utils.pressKey(32); + utils.pressKey('a'); // check if data contains rt }); @@ -78,7 +78,7 @@ describe('on_interaction_data_update', function(){ expect(updatefn.mock.calls.length).toBeGreaterThanOrEqual(1); // >= because of jsdom window not isolated to this test. 
// click through first trial - utils.pressKey(32); + utils.pressKey('a'); }); test('fires for focus', function(){ @@ -94,7 +94,7 @@ describe('on_interaction_data_update', function(){ window.dispatchEvent(new Event('focus')); expect(updatefn.mock.calls.length).toBeGreaterThanOrEqual(1); // >= because of jsdom window not isolated to this test. // click through first trial - utils.pressKey(32); + utils.pressKey('a'); }) /* not sure yet how to test fullscreen events with jsdom engine */ diff --git a/tests/jsPsych.data/trialparameters.test.js b/tests/jsPsych.data/trialparameters.test.js new file mode 100644 index 0000000000..0fced0a1f1 --- /dev/null +++ b/tests/jsPsych.data/trialparameters.test.js @@ -0,0 +1,175 @@ +const root = '../../'; +const utils = require('../testing-utils.js'); + +beforeEach(function(){ + require(root + 'jspsych.js'); + require(root + 'plugins/jspsych-html-keyboard-response.js'); +}); + +describe('Trial parameters in the data', function(){ + test('Can be added by specifying the parameter with a value of true in save_trial_parameters', function(){ + + var trial = { + type: 'html-keyboard-response', + stimulus: '
<p>foo</p>
', + save_trial_parameters: { + choices: true, + trial_duration: true + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey(' '); + + var data = jsPsych.data.get().values()[0]; + expect(data.choices).not.toBeUndefined(); + expect(data.trial_duration).not.toBeUndefined(); + }); + + test('Can be removed by specifying the parameter with a value of false in save_trial_parameters', function(){ + + var trial = { + type: 'html-keyboard-response', + stimulus: '
<p>foo</p>
', + save_trial_parameters: { + stimulus: false + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey(' '); + + var data = jsPsych.data.get().values()[0]; + expect(data.stimulus).toBeUndefined(); + }); + + test('For compatibility with data access functions, internal_node_id and trial_index cannot be removed', function(){ + + var trial = { + type: 'html-keyboard-response', + stimulus: '
<p>foo</p>
', + save_trial_parameters: { + internal_node_id: false, + trial_index: false + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey(' '); + + var data = jsPsych.data.get().values()[0]; + expect(data.internal_node_id).not.toBeUndefined(); + expect(data.trial_index).not.toBeUndefined(); + }) + + test('Invalid parameter names throw a warning in the console', function(){ + + const spy = jest.spyOn(console, 'warn').mockImplementation(); + + var trial = { + type: 'html-keyboard-response', + stimulus: '
<p>foo</p>
', + save_trial_parameters: { + foo: true, + bar: false + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey(' '); + + expect(spy).toHaveBeenCalled(); + spy.mockRestore(); + }); + + test('Arrayed objects work with save_trial_parameters ', function(){ + + require(root + 'plugins/jspsych-survey-text.js'); + + var q = [ + {prompt: 'foo'}, + {prompt: 'bar'} + ] + var trial = { + type: 'survey-text', + questions: q, + save_trial_parameters: { + questions: true + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.clickTarget(document.querySelector('#jspsych-survey-text-next')); + + var data = jsPsych.data.get().values()[0]; + expect(data.questions[0].prompt).toBe(q[0].prompt); + expect(data.questions[1].prompt).toBe(q[1].prompt); + }); + + test('Function-based parameters are stored as string representations ', function(){ + + require(root + 'plugins/jspsych-reconstruction.js'); + + var sample_function = function(param){ + var size = 50 + Math.floor(param*250); + var html = '
<div style="display: block; margin: auto; height: 300px; width: 300px; position: relative;">'+ + '<div style="display: block; position: absolute; top: '+(150 - size/2)+'px; left: '+(150 - size/2)+'px; background-color: #000000; width: '+size+'px; height: '+size+'px;"></div></div>
'; + return html; + } + + var trial = { + type: 'reconstruction', + stim_function: sample_function, + starting_value: 0.25, + save_trial_parameters: { + stim_function: true + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.clickTarget(document.querySelector('button')); + + var data = jsPsych.data.get().values()[0]; + expect(data.stim_function).toBe(sample_function.toString()); + }); + + test('Dynamic parameters record their evaluated value', function(){ + + var trial = { + type: 'html-keyboard-response', + stimulus: '
<p>foo</p>
', + trial_duration: function() { return 1000; }, + save_trial_parameters: { + trial_duration: true + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey(' '); + + var data = jsPsych.data.get().values()[0]; + expect(data.trial_duration).toBe(1000); + }); +}) \ No newline at end of file diff --git a/tests/jsPsych.extensions/extensions.test.js b/tests/jsPsych.extensions/extensions.test.js new file mode 100644 index 0000000000..d12c434e7b --- /dev/null +++ b/tests/jsPsych.extensions/extensions.test.js @@ -0,0 +1,207 @@ +const utils = require('../testing-utils.js'); +const root = '../../'; + +jest.useFakeTimers(); + +describe('jsPsych.extensions', function () { + + beforeEach(function () { + require(root + 'jspsych.js'); + require(root + 'plugins/jspsych-html-keyboard-response.js'); + require('./test-extension.js'); + }); + + test('initialize is called at start of experiment', function () { + + var initFunc = jest.spyOn(jsPsych.extensions.test, 'initialize'); + + var timeline = [{type: 'html-keyboard-response', stimulus: 'foo'}]; + + jsPsych.init({ + timeline, + extensions: [{type: 'test'}] + }); + + expect(initFunc).toHaveBeenCalled(); + }); + + test('initialize gets params', function(){ + var initFunc = jest.spyOn(jsPsych.extensions.test, 'initialize'); + + var timeline = [{type: 'html-keyboard-response', stimulus: 'foo'}]; + + jsPsych.init({ + timeline, + extensions: [{type: 'test', params: {foo: 1}}] + }); + + expect(initFunc).toHaveBeenCalledWith({foo: 1}); + }); + + test('on_start is called before trial', function(){ + var onStartFunc = jest.spyOn(jsPsych.extensions.test, 'on_start'); + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + extensions: [ + {type: 'test'} + ], + on_load: function(){ + expect(onStartFunc).toHaveBeenCalled(); + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey('a'); + }); + + test('on_start gets params', function(){ + var onStartFunc = jest.spyOn(jsPsych.extensions.test, 
'on_start'); + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + extensions: [ + {type: 'test', params: {foo: 1}} + ], + on_load: function(){ + expect(onStartFunc).toHaveBeenCalledWith({foo: 1}); + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey('a'); + }); + + test('on_load is called after load', function(){ + var onLoadFunc = jest.spyOn(jsPsych.extensions.test, 'on_load'); + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + extensions: [ + {type: 'test'} + ], + on_load: function(){ + // trial load happens before extension load + expect(onLoadFunc).not.toHaveBeenCalled(); + } + } + + jsPsych.init({ + timeline: [trial] + }); + + expect(onLoadFunc).toHaveBeenCalled(); + + utils.pressKey('a'); + }); + + test('on_load gets params', function(){ + var onLoadFunc = jest.spyOn(jsPsych.extensions.test, 'on_load'); + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + extensions: [ + {type: 'test', params: {foo:1}} + ] + } + + jsPsych.init({ + timeline: [trial] + }); + + expect(onLoadFunc).toHaveBeenCalledWith({foo:1}); + + utils.pressKey('a'); + }); + + test('on_finish called after trial', function(){ + var onFinishFunc = jest.spyOn(jsPsych.extensions.test, 'on_finish'); + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + extensions: [ + {type: 'test', params: {foo:1}} + ] + } + + jsPsych.init({ + timeline: [trial] + }); + + expect(onFinishFunc).not.toHaveBeenCalled(); + + utils.pressKey('a'); + + expect(onFinishFunc).toHaveBeenCalled(); + }); + + test('on_finish gets params', function(){ + var onFinishFunc = jest.spyOn(jsPsych.extensions.test, 'on_finish'); + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + extensions: [ + {type: 'test', params: {foo:1}} + ] + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey('a'); + + expect(onFinishFunc).toHaveBeenCalledWith({foo:1}); + }); + + test('on_finish adds trial data', function(){ + + var 
trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + extensions: [ + {type: 'test', params: {foo:1}} + ] + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey('a'); + + expect(jsPsych.data.get().values()[0].extension_data).toBe(true); + }); + + test('on_finish data is available in trial on_finish', function(){ + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + extensions: [ + {type: 'test', params: {foo:1}} + ], + on_finish: function(data){ + expect(data.extension_data).toBe(true); + } + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey('a'); + }); +}); \ No newline at end of file diff --git a/tests/jsPsych.extensions/test-extension.js b/tests/jsPsych.extensions/test-extension.js new file mode 100644 index 0000000000..b45a4c699b --- /dev/null +++ b/tests/jsPsych.extensions/test-extension.js @@ -0,0 +1,42 @@ +jsPsych.extensions['test'] = (function () { + + var extension = {}; + + // private state for the extension + // extension authors can define public functions to interact + // with the state. recommend not exposing state directly + // so that state manipulations are checked. + var state = {}; + + // required, will be called at jsPsych.init + // should return a Promise + extension.initialize = function (params) { + return new Promise(function(resolve, reject){ + resolve(); + }); + } + + // required, will be called when the trial starts (before trial loads) + extension.on_start = function (params) { + + } + + // required will be called when the trial loads + extension.on_load = function (params) { + + } + + // required, will be called when jsPsych.finishTrial() is called + // must return data object to be merged into data. 
+ extension.on_finish = function (params) { + // send back data + return { + extension_data: true + } + } + + return extension; + + })(); + + \ No newline at end of file diff --git a/tests/jsPsych.pluginAPI/pluginapi.test.js b/tests/jsPsych.pluginAPI/pluginapi.test.js index 957fbfb323..289acf9fd7 100644 --- a/tests/jsPsych.pluginAPI/pluginapi.test.js +++ b/tests/jsPsych.pluginAPI/pluginapi.test.js @@ -1,80 +1,240 @@ const root = '../../'; -require(root + 'jspsych.js'); -require(root + 'plugins/jspsych-html-keyboard-response.js'); +beforeEach(function(){ + require(root + 'jspsych.js'); + require(root + 'plugins/jspsych-html-keyboard-response.js'); +}); describe('#getKeyboardResponse', function(){ - beforeEach(function(){ + test('should execute a function after successful keypress', function(){ + var callback = jest.fn(); var t = { type: 'html-keyboard-response', stimulus: 'foo', - choices: ['Q'] - } - + choices: ['q'] + }; jsPsych.init({ timeline: [t] }) - }); - test('should execute a function after successful keypress', function(){ - var callback = jest.fn(); jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback}); expect(callback.mock.calls.length).toBe(0); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); expect(callback.mock.calls.length).toBe(1); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'a'})); expect(callback.mock.calls.length).toBe(1); }); test('should execute only valid keys', function(){ var callback = jest.fn(); - jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: [13]}); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ 
+ timeline: [t] + }) + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a']}); expect(callback.mock.calls.length).toBe(0); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 54})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 54})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'b'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'b'})); expect(callback.mock.calls.length).toBe(0); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 13})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 13})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'a'})); expect(callback.mock.calls.length).toBe(1); }); test('should not respond when jsPsych.NO_KEYS is used', function(){ var callback = jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t] + }) jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: jsPsych.NO_KEYS}); expect(callback.mock.calls.length).toBe(0); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 54})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 54})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'a'})); expect(callback.mock.calls.length).toBe(0); - 
document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'a'})); expect(callback.mock.calls.length).toBe(0); }); test('should not respond to held keys when allow_held_key is false', function(){ var callback = jest.fn(); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t] + }) + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: jsPsych.ALL_KEYS, allow_held_key: false}); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); expect(callback.mock.calls.length).toBe(0); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'a'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'a'})); 
expect(callback.mock.calls.length).toBe(1); }); test('should respond to held keys when allow_held_key is true', function(){ var callback = jest.fn(); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t] + }) + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: jsPsych.ALL_KEYS, allow_held_key: true}); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); + expect(callback.mock.calls.length).toBe(1); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'a'})); + }); + test('should convert response key to lowercase before determining validity, when case_sensitive_responses is false', function(){ + var callback = jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t], + case_sensitive_responses: false + }) + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a']}); + expect(callback.mock.calls.length).toBe(0); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'A'})); expect(callback.mock.calls.length).toBe(1); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); + }); + test('should not convert response key to lowercase before determining validity, when case_sensitive_responses is true', function(){ + var callback = 
jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t], + case_sensitive_responses: true + }) + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a']}); + expect(callback.mock.calls.length).toBe(0); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'A'})); + expect(callback.mock.calls.length).toBe(0); + }); + test('should not respond to held key when response/valid key case differs, case_sensitive_responses is false, and allow held key is false', function(){ + var callback = jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t], + case_sensitive_responses: false + }) + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a'], allow_held_key: false}); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + expect(callback.mock.calls.length).toBe(0); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'A'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'A'})); + expect(callback.mock.calls.length).toBe(1); + }); + test('should respond to held keys when response/valid case differs, case_sensitive_responses is false, and allow_held_key is true', function(){ + var callback = jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t], + 
case_sensitive_responses: false + }) + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a'], allow_held_key: true}); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + expect(callback.mock.calls.length).toBe(1); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'A'})); + }); + test('should not respond to a held key when response/valid case differs, case_sensitive_responses is true, and allow_held_key is true', function(){ + var callback = jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t], + case_sensitive_responses: true + }) + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a'], allow_held_key: true}); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + expect(callback.mock.calls.length).toBe(0); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'A'})); + }); + test('should not respond to a held key when response/valid case differs, case_sensitive_responses is true, and allow_held_key is false', function(){ + var callback = jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t], + case_sensitive_responses: true + }) + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a'], allow_held_key: false}); + 
document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + expect(callback.mock.calls.length).toBe(0); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'A'})); + }); + test('should default to case insensitive when used before jsPsych.init is called', function(){ + expect(typeof jsPsych.initSettings().case_sensitive_responses).toBe("undefined"); + var callback = jest.fn(); + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a']}); + jsPsych.pluginAPI.createKeyboardEventListeners(document.body); + expect(callback.mock.calls.length).toBe(0); + document.body.dispatchEvent(new KeyboardEvent('keydown', {key: 'a'})); + document.body.dispatchEvent(new KeyboardEvent('keyup', {key: 'a'})); + expect(callback.mock.calls.length).toBe(1); + jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback, valid_responses: ['a']}); + jsPsych.pluginAPI.createKeyboardEventListeners(document.body); + document.body.dispatchEvent(new KeyboardEvent('keydown', {key: 'A'})); + document.body.dispatchEvent(new KeyboardEvent('keyup', {key: 'A'})); + expect(callback.mock.calls.length).toBe(2); + jsPsych.pluginAPI.reset(document.body); }); }) describe('#cancelKeyboardResponse', function(){ test('should cancel a keyboard response listener', function(){ var callback = jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t] + }) var listener = jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback}); expect(callback.mock.calls.length).toBe(0); jsPsych.pluginAPI.cancelKeyboardResponse(listener); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); + 
document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'q'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'q'})); expect(callback.mock.calls.length).toBe(0); }); }); @@ -82,17 +242,25 @@ describe('#cancelKeyboardResponse', function(){ describe('#cancelAllKeyboardResponses', function(){ test('should cancel all keyboard response listeners', function(){ var callback = jest.fn(); + var t = { + type: 'html-keyboard-response', + stimulus: 'foo', + choices: ['q'] + }; + jsPsych.init({ + timeline: [t] + }) jsPsych.pluginAPI.getKeyboardResponse({callback_function: callback}); expect(callback.mock.calls.length).toBe(0); jsPsych.pluginAPI.cancelAllKeyboardResponses(); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: 'q'})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: 'q'})); expect(callback.mock.calls.length).toBe(0); }); }); describe('#compareKeys', function(){ - test('should compare keys regardless of type', function(){ + test('should compare keys regardless of type (old key-keyCode functionality)', function(){ expect(jsPsych.pluginAPI.compareKeys('q', 81)).toBe(true); expect(jsPsych.pluginAPI.compareKeys(81, 81)).toBe(true); expect(jsPsych.pluginAPI.compareKeys('q', 'Q')).toBe(true); @@ -100,6 +268,59 @@ describe('#compareKeys', function(){ expect(jsPsych.pluginAPI.compareKeys('q','1')).toBe(false); expect(jsPsych.pluginAPI.compareKeys('q',80)).toBe(false); }); + test('should be case sensitive when case_sensitive_responses is true', function(){ + var t = { + type: 'html-keyboard-response', + stimulus: 'foo' + }; + jsPsych.init({ + timeline: [t], 
+ case_sensitive_responses: true + }) + expect(jsPsych.pluginAPI.compareKeys('q', 'Q')).toBe(false); + expect(jsPsych.pluginAPI.compareKeys('q', 'q')).toBe(true); + }); + test('should not be case sensitive when case_sensitive_responses is false', function(){ + var t = { + type: 'html-keyboard-response', + stimulus: 'foo' + }; + jsPsych.init({ + timeline: [t], + case_sensitive_responses: false + }) + expect(jsPsych.pluginAPI.compareKeys('q', 'Q')).toBe(true); + expect(jsPsych.pluginAPI.compareKeys('q', 'q')).toBe(true); + }); + test('should default to case insensitive for strings when used before jsPsych.init is called', function(){ + expect(typeof jsPsych.initSettings().case_sensitive_responses).toBe("undefined"); + expect(jsPsych.pluginAPI.compareKeys('q', 'Q')).toBe(true); + expect(jsPsych.pluginAPI.compareKeys('q', 'q')).toBe(true); + }); + test('should accept null as argument, and return true if both arguments are null, and return false if one argument is null and other is string or numeric', function() { + const spy = jest.spyOn(console, 'error').mockImplementation(); + expect(jsPsych.pluginAPI.compareKeys(null, 'Q')).toBe(false); + expect(jsPsych.pluginAPI.compareKeys(80, null)).toBe(false); + expect(jsPsych.pluginAPI.compareKeys(null, null)).toBe(true); + expect(console.error).not.toHaveBeenCalled(); + spy.mockRestore(); + }); + test('should return undefined and produce a console warning if either/both arguments are not a string, integer, or null', function() { + const spy = jest.spyOn(console, 'error').mockImplementation(); + var t1 = jsPsych.pluginAPI.compareKeys({}, 'Q'); + var t2 = jsPsych.pluginAPI.compareKeys(true, null); + var t3 = jsPsych.pluginAPI.compareKeys(null, ['Q']); + expect(typeof t1).toBe('undefined'); + expect(typeof t2).toBe('undefined'); + expect(typeof t3).toBe('undefined'); + expect(console.error).toHaveBeenCalledTimes(3); + expect(console.error.mock.calls).toEqual([ + ['Error in jsPsych.pluginAPI.compareKeys: arguments must be numeric 
key codes, key strings, or null.'], + ['Error in jsPsych.pluginAPI.compareKeys: arguments must be numeric key codes, key strings, or null.'], + ['Error in jsPsych.pluginAPI.compareKeys: arguments must be numeric key codes, key strings, or null.'], + ]); + spy.mockRestore(); + }); }); describe('#convertKeyCharacterToKeyCode', function(){ diff --git a/tests/jsPsych.pluginAPI/preloads.test.js b/tests/jsPsych.pluginAPI/preloads.test.js new file mode 100644 index 0000000000..2ad211e713 --- /dev/null +++ b/tests/jsPsych.pluginAPI/preloads.test.js @@ -0,0 +1,43 @@ +const root = '../../'; +const utils = require('../testing-utils.js'); + +beforeEach(function(){ + require(root + 'jspsych.js'); + require(root + 'plugins/jspsych-html-keyboard-response.js'); +}); + +describe('getAutoPreloadList', function(){ + test('gets whole timeline when no argument provided', function(){ + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + var t = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var timeline = [t]; + + jsPsych.init({ + timeline: timeline + }) + + var images = jsPsych.pluginAPI.getAutoPreloadList().images; + + expect(images[0]).toBe('img/foo.png'); + }) + test('works with images', function(){ + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + var t = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png' + } + + var timeline = [t]; + + var images = jsPsych.pluginAPI.getAutoPreloadList(timeline).images; + + expect(images[0]).toBe('img/foo.png'); + }) +}) \ No newline at end of file diff --git a/tests/jsPsych.randomization/randomziation.test.js b/tests/jsPsych.randomization/randomziation.test.js index ba2e1847fe..d063b291ef 100644 --- a/tests/jsPsych.randomization/randomziation.test.js +++ b/tests/jsPsych.randomization/randomziation.test.js @@ -2,6 +2,7 @@ const root = '../../'; require(root + 'jspsych.js'); +const originalRandomFunction = Math.random; describe('#shuffle', function(){ 
test('should produce fixed order with mock RNG', function(){ @@ -25,3 +26,20 @@ describe('#randomID', function(){ expect(jsPsych.randomization.randomID(3)).toBe("37a"); }); }); + +describe('shuffleNoRepeats', function(){ + test('should generate a random order with no repeats', function(){ + if(typeof Math.random.mock !== 'undefined'){ + Math.random = originalRandomFunction; + } + var equalityTest = function(a,b){ return a === b }; + var toShuffle = ['a','b','c','d']; + var repeated = jsPsych.randomization.repeat(toShuffle, 20); + var randomOrder = jsPsych.randomization.shuffleNoRepeats(repeated, equalityTest); + var repeats = 0; + for(var i=1; i<randomOrder.length; i++){ + if(equalityTest(randomOrder[i], randomOrder[i-1])){ + repeats++; + } + } + expect(repeats).toBe(0); + }); +}) <p>foo</p>

', + css_classes: ['foo'] + } + + jsPsych.init({timeline:[trial]}); + + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(true); + utils.pressKey('a'); + }) + + test('Gracefully handles single class when not in array', function(){ + var trial = { + type: 'html-keyboard-response', + stimulus: '

<p>foo</p>

', + css_classes: 'foo' + } + + jsPsych.init({timeline:[trial]}); + + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(true); + utils.pressKey('a'); + }) + + test('Removes the added classes at the end of the trial', function(){ + var trial = { + type: 'html-keyboard-response', + stimulus: '

<p>foo</p>

', + css_classes: ['foo'] + } + + jsPsych.init({timeline:[trial]}); + + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(true); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(false); + + }) + + test('Class inherits in nested timelines', function(){ + var tm = { + timeline: [{ + type: 'html-keyboard-response', + stimulus: '

<p>foo</p>

', + }], + css_classes: ['foo'] + } + + jsPsych.init({timeline:[tm]}); + + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(true); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(false); + + }) + + test('Parameter works when defined as a function', function(){ + var trial = { + type: 'html-keyboard-response', + stimulus: '

<p>foo</p>

', + css_classes: function(){ + return ['foo'] + } + } + + jsPsych.init({timeline:[trial]}); + + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(true); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(false); + + }) + + test('Parameter works when defined as a timeline variable', function(){ + var trial = { + type: 'html-keyboard-response', + stimulus: '

<p>foo</p>

', + css_classes: jsPsych.timelineVariable('css') + } + + var t = { + timeline: [trial], + timeline_variables: [ + {css: ['foo']} + ] + } + + jsPsych.init({timeline:[t]}); + + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(true); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().classList.contains('foo')).toBe(false); + + }) +}) \ No newline at end of file diff --git a/tests/jsPsych/default-iti.test.js b/tests/jsPsych/default-iti.test.js index c836e9a67e..ac546e5451 100644 --- a/tests/jsPsych/default-iti.test.js +++ b/tests/jsPsych/default-iti.test.js @@ -23,9 +23,9 @@ describe('default iti parameter', function(){ jsPsych.init({timeline: [t,t2]}); expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); - utils.pressKey(32); + utils.pressKey('a'); }); test('creates a correct delay when set', function(){ @@ -42,10 +42,10 @@ describe('default iti parameter', function(){ jsPsych.init({timeline: [t,t2], default_iti: 100}); expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).not.toMatch('bar'); jest.advanceTimersByTime(100); expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); - utils.pressKey(32); + utils.pressKey('a'); }); }); \ No newline at end of file diff --git a/tests/jsPsych/default-parameters.test.js b/tests/jsPsych/default-parameters.test.js index 66b9a290a3..ddc05e4a8e 100644 --- a/tests/jsPsych/default-parameters.test.js +++ b/tests/jsPsych/default-parameters.test.js @@ -27,4 +27,32 @@ describe('nested defaults', function(){ expect(display.querySelector('input').placeholder).toBe("") expect(display.querySelector('input').size).toBe(40) }); + + test('safe against extending the array.prototype (issue #989)', function(){ + Array.prototype.qq = jest.fn(); + const spy = jest.spyOn(console, 
'error').mockImplementation(); + + var t = { + type: 'survey-text', + questions: [ + { + prompt: 'Question 1.' + }, + { + prompt: 'Question 2.' + } + ] + } + + jsPsych.init({timeline: [t]}) + + var display = jsPsych.getDisplayElement(); + + expect(display.querySelector('input').placeholder).toBe("") + expect(display.querySelector('input').size).toBe(40) + + expect(spy).not.toHaveBeenCalled(); + + spy.mockRestore(); + }); }) \ No newline at end of file diff --git a/tests/jsPsych/endexperiment.test.js b/tests/jsPsych/endexperiment.test.js index 9b6dd16abe..d1d7d56356 100644 --- a/tests/jsPsych/endexperiment.test.js +++ b/tests/jsPsych/endexperiment.test.js @@ -24,7 +24,7 @@ test('works on basic timeline', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch('trial 1'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('the end'); }); @@ -43,7 +43,7 @@ test('works with looping timeline (#541)', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch('trial 1'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('the end'); }); \ No newline at end of file diff --git a/tests/jsPsych/events.test.js b/tests/jsPsych/events.test.js index a203068af1..7c094b3bc6 100644 --- a/tests/jsPsych/events.test.js +++ b/tests/jsPsych/events.test.js @@ -16,7 +16,7 @@ describe('on_finish (trial)', function(){ type: 'html-keyboard-response', stimulus: 'hello', on_finish: function(data){ - key_data = data.key_press; + key_data = data.response; } } @@ -27,9 +27,9 @@ describe('on_finish (trial)', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); - })).then(function(data) { expect(data.key_data).toBe(32) }); + })).then(function(data) { expect(data.key_data).toBe('a') }); }); test('should be able to write to the data', function(){ @@ -42,19 +42,19 @@ describe('on_finish (trial)', function(){ type: 'html-keyboard-response', stimulus: 'hello', on_finish: 
function(data){ - data.key_press = 1; + data.response = 1; } } jsPsych.init({ timeline: [trial], on_finish: function() { - promise_data.final_key_press = jsPsych.data.get().values()[0].key_press; + promise_data.final_key_press = jsPsych.data.get().values()[0].response; resolve(promise_data); } }); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(pd) { @@ -86,7 +86,7 @@ describe('on_start (trial)', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); })).then(function(data) { expect(data).toBe('hello') }); }); @@ -115,7 +115,7 @@ describe('on_start (trial)', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); })).then(function(data) { expect(data).toBe('hello') }); }); @@ -136,18 +136,18 @@ describe('on_trial_finish (experiment level)', function(){ jsPsych.init({ timeline: [trial], on_trial_finish: function(data){ - promise_data.key = data.key_press; + promise_data.key = data.response; }, on_finish: function(){ resolve(promise_data); } }); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(pd) { - expect(pd.key).toBe(32); + expect(pd.key).toBe('a'); }); }); @@ -173,7 +173,7 @@ describe('on_trial_finish (experiment level)', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(pd) { @@ -197,18 +197,18 @@ describe('on_data_update', function(){ jsPsych.init({ timeline: [trial], on_data_update: function(data){ - promise_data.key = data.key_press; + promise_data.key = data.response; }, on_finish: function(){ resolve(promise_data); } }); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(pd) { - expect(pd.key).toBe(32); + expect(pd.key).toBe('a'); }); }); @@ -244,8 +244,8 @@ describe('on_data_update', function(){ //resolve(); })).then(function(pd) { - expect(pd[0].key_press).not.toBeUndefined(); - expect(pd[0].key_press).toBeNull(); + expect(pd[0].response).not.toBeUndefined(); + expect(pd[0].response).toBeNull(); 
expect(pd[1].response).toBeNull(); expect(pd[1].rt).toBeNull(); }); @@ -275,7 +275,7 @@ describe('on_data_update', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(pd) { @@ -307,7 +307,7 @@ describe('on_data_update', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(pd) { @@ -339,7 +339,7 @@ describe('on_trial_start', function(){ } }); - utils.pressKey(32); + utils.pressKey('a'); //resolve(); })).then(function(pd) { @@ -364,6 +364,243 @@ describe('on_trial_start', function(){ var display_element = jsPsych.getDisplayElement(); expect(display_element.innerHTML).toMatch('goodbye'); - utils.pressKey(32); + utils.pressKey('a'); }); }); + +describe('on_timeline_finish', function(){ + test('should fire once when timeline is complete', function(){ + + var on_finish_fn = jest.fn(); + + var mini_timeline = { + timeline: [ + { + type: 'html-keyboard-response', + stimulus: 'foo' + }, + { + type: 'html-keyboard-response', + stimulus: 'foo' + }, + { + type: 'html-keyboard-response', + stimulus: 'foo' + } + ], + on_timeline_finish: on_finish_fn + } + + jsPsych.init({timeline: [mini_timeline]}); + + utils.pressKey('a'); + expect(on_finish_fn).not.toHaveBeenCalled(); + utils.pressKey('a'); + expect(on_finish_fn).not.toHaveBeenCalled(); + utils.pressKey('a'); + expect(on_finish_fn).toHaveBeenCalled(); + }); + + test('should fire once even with timeline variables', function(){ + + var on_finish_fn = jest.fn(); + + var tvs = [ + {x: 1}, + {x: 2}, + ] + + var mini_timeline = { + timeline: [ + { + type: 'html-keyboard-response', + stimulus: 'foo' + } + ], + on_timeline_finish: on_finish_fn, + timeline_variables: tvs + } + + jsPsych.init({timeline: [mini_timeline]}); + + utils.pressKey('a'); + utils.pressKey('a'); + expect(on_finish_fn.mock.calls.length).toBe(1); + + }) + + test('should fire on every repetition', function(){ + + var on_finish_fn = jest.fn(); + + var mini_timeline = { + timeline: [ + 
{ + type: 'html-keyboard-response', + stimulus: 'foo' + } + ], + on_timeline_finish: on_finish_fn, + repetitions: 2 + } + + jsPsych.init({timeline: [mini_timeline]}); + + utils.pressKey('a'); + utils.pressKey('a'); + expect(on_finish_fn.mock.calls.length).toBe(2); + + }) + + test('should fire before a loop function', function(){ + + var callback = jest.fn().mockImplementation(function(str) {return str;}); + var count = 0; + + var mini_timeline = { + timeline: [ + { + type: 'html-keyboard-response', + stimulus: 'foo' + } + ], + on_timeline_finish: function() {callback('finish');}, + loop_function: function() { + callback('loop'); + count++; + if (count==2) { + return false; + } else { + return true; + } + } + } + + jsPsych.init({timeline: [mini_timeline]}); + + utils.pressKey('a'); + utils.pressKey('a'); + expect(callback.mock.calls.length).toBe(4); + expect(callback.mock.calls[0][0]).toBe('finish'); + expect(callback.mock.calls[1][0]).toBe('loop'); + expect(callback.mock.calls[2][0]).toBe('finish'); + expect(callback.mock.calls[3][0]).toBe('loop'); + + }) + +}) + +describe('on_timeline_start', function(){ + test('should fire once when timeline starts', function(){ + + var on_start_fn = jest.fn(); + + var mini_timeline = { + timeline: [ + { + type: 'html-keyboard-response', + stimulus: 'foo' + }, + { + type: 'html-keyboard-response', + stimulus: 'foo' + }, + { + type: 'html-keyboard-response', + stimulus: 'foo' + } + ], + on_timeline_start: on_start_fn + } + + jsPsych.init({timeline: [mini_timeline]}); + + expect(on_start_fn).toHaveBeenCalled(); + utils.pressKey('a'); + utils.pressKey('a'); + utils.pressKey('a'); + expect(on_start_fn.mock.calls.length).toBe(1); + + }) + + test('should fire once even with timeline variables', function(){ + + var on_start_fn = jest.fn(); + + var tvs = [ + {x: 1}, + {x: 2}, + ] + + var mini_timeline = { + timeline: [ + { + type: 'html-keyboard-response', + stimulus: 'foo' + } + ], + on_timeline_start: on_start_fn, + timeline_variables: 
tvs + } + + jsPsych.init({timeline: [mini_timeline]}); + + expect(on_start_fn).toHaveBeenCalled(); + utils.pressKey('a'); + utils.pressKey('a'); + expect(on_start_fn.mock.calls.length).toBe(1); + + }) + + test('should fire on every repetition', function(){ + + var on_start_fn = jest.fn(); + + var mini_timeline = { + timeline: [ + { + type: 'html-keyboard-response', + stimulus: 'foo' + } + ], + on_timeline_start: on_start_fn, + repetitions: 2 + } + + jsPsych.init({timeline: [mini_timeline]}); + + expect(on_start_fn).toHaveBeenCalled(); + utils.pressKey('a'); + utils.pressKey('a'); + expect(on_start_fn.mock.calls.length).toBe(2); + + }) + + test('should fire after a conditional function', function(){ + + var callback = jest.fn().mockImplementation(function(str) {return str;}); + + var mini_timeline = { + timeline: [ + { + type: 'html-keyboard-response', + stimulus: 'foo' + } + ], + on_timeline_start: function() {callback('start');}, + conditional_function: function() { + callback('conditional'); + return true; + } + } + + jsPsych.init({timeline: [mini_timeline]}); + + expect(callback.mock.calls.length).toBe(2); + expect(callback.mock.calls[0][0]).toBe("conditional"); + expect(callback.mock.calls[1][0]).toBe("start"); + utils.pressKey('a'); + + }) + +}) \ No newline at end of file diff --git a/tests/jsPsych/functions-as-parameters.test.js b/tests/jsPsych/functions-as-parameters.test.js new file mode 100644 index 0000000000..171cb915e3 --- /dev/null +++ b/tests/jsPsych/functions-as-parameters.test.js @@ -0,0 +1,210 @@ +const root = '../../'; +const utils = require('../testing-utils.js'); + +beforeEach(function(){ + require(root + 'jspsych.js'); + require(root + 'plugins/jspsych-html-keyboard-response.js'); + require(root + 'plugins/jspsych-survey-text.js'); +}); + +describe('standard use of function as parameter', function(){ + test('function value is used as parameter', function(){ + var trial = { + type: 'html-keyboard-response', + stimulus: function(){ + return 
'foo' + } + } + + jsPsych.init({ + timeline: [trial] + }) + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + utils.pressKey('a'); + }); + + test('function evaluates at runtime', function(){ + var x = 'foo'; + + var trial = { + type: 'html-keyboard-response', + stimulus: function(){ + return x; + } + } + + x = 'bar'; + + jsPsych.init({ + timeline: [trial] + }) + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); + utils.pressKey('a'); + }) + + test('parameters can be protected from early evaluation using jsPsych.plugins.parameterType.FUNCTION', function(){ + require(root + 'plugins/jspsych-cloze.js'); + + var mock = jest.fn(); + + var trial = { + type: 'cloze', + text: '%foo%', + check_answers: true, + mistake_fn: mock + } + + jsPsych.init({timeline: [trial]}); + + expect(mock).not.toHaveBeenCalled(); + utils.clickTarget(document.querySelector('#finish_cloze_button')); + expect(mock).toHaveBeenCalledTimes(1); + }) +}) + +describe('data as function', function(){ + test('entire data object can be function', function(){ + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + data: function(){ + return {x:1} + } + } + + jsPsych.init({ + timeline: [trial] + }) + + utils.pressKey('a'); + expect(jsPsych.data.get().values()[0].x).toBe(1) + }) + + test('single parameter of data object can be function', function(){ + + var trial = { + type: 'html-keyboard-response', + stimulus: 'foo', + data: { + x: function() { return 1; } + } + } + + jsPsych.init({ + timeline: [trial] + }) + + utils.pressKey('a'); + expect(jsPsych.data.get().values()[0].x).toBe(1) + }) +}) + +describe('nested parameters as functions', function(){ + + test('entire parameter can be a function', function(){ + + var trial = { + type: 'survey-text', + questions: function(){ + return [{prompt: "How old are you?"}, {prompt: "Where were you born?"}] + } + } + + jsPsych.init({ + timeline: [trial] + }); + + 
expect(jsPsych.getDisplayElement().querySelectorAll('p.jspsych-survey-text').length).toBe(2); + + utils.clickTarget(document.querySelector('#jspsych-survey-text-next')); + + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + }) + + test('nested parameter can be a function', function(){ + + var trial = { + type: 'survey-text', + questions: [{prompt: function(){ return "foo"; }}, {prompt: "bar"}] + } + + jsPsych.init({ + timeline: [trial] + }); + + expect(document.querySelector('#jspsych-survey-text-0 p.jspsych-survey-text').innerHTML).toBe('foo'); + expect(document.querySelector('#jspsych-survey-text-1 p.jspsych-survey-text').innerHTML).toBe('bar'); + utils.clickTarget(document.querySelector('#jspsych-survey-text-next')); + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + }) + + test('multiple nested parameters can be functions', function(){ + require(root + 'plugins/jspsych-survey-multi-choice.js'); + + var trial = { + type: 'survey-multi-choice', + questions: [ + {prompt: function(){ return "foo"; }, options: function() { return ['buzz','fizz']; }}, + {prompt: "bar", options: function() { return ['one','two']; }} + ] + } + + jsPsych.init({ + timeline: [trial] + }); + + expect(document.querySelector('#jspsych-survey-multi-choice-0').innerHTML).toMatch('foo'); + expect(document.querySelector('#jspsych-survey-multi-choice-0').innerHTML).toMatch('buzz'); + expect(document.querySelector('#jspsych-survey-multi-choice-1').innerHTML).toMatch('bar'); + expect(document.querySelector('#jspsych-survey-multi-choice-1').innerHTML).toMatch('one'); + utils.clickTarget(document.querySelector('#jspsych-survey-multi-choice-next')); + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + }) + + test('nested parameters can be protected from early evaluation using jsPsych.plugins.parameterType.FUNCTION', function(){ + // currently no plugins that use this feature (Jan. 2021), so here's a simple placeholder plugin. 
+ jsPsych.plugins['fn-test'] = {}; + jsPsych.plugins['fn-test'].info = { + parameters: { + foo: { + type: jsPsych.plugins.parameterType.COMPLEX, + default: null, + nested: { + not_protected: { + type: jsPsych.plugins.parameterType.STRING, + default: null + }, + protected: { + type: jsPsych.plugins.parameterType.FUNCTION, + default: null + } + } + } + } + } + jsPsych.plugins['fn-test'].trial = function(display_element, trial){ + jsPsych.finishTrial({ + not_protected: trial.foo[0].not_protected, + protected: trial.foo[0].protected + }) + } + + var trial = { + type: 'fn-test', + foo: [{ + not_protected: function(){ return 'x';}, + protected: function() { return 'y';} + }] + } + + jsPsych.init({timeline: [trial]}); + + var data = jsPsych.data.get().values()[0]; + expect(data.not_protected).toBe('x'); + expect(data.protected).not.toBe('y'); + expect(data.protected()).toBe('y'); + }) +}) \ No newline at end of file diff --git a/tests/jsPsych/init.test.js b/tests/jsPsych/init.test.js new file mode 100644 index 0000000000..6f3ea688d5 --- /dev/null +++ b/tests/jsPsych/init.test.js @@ -0,0 +1,48 @@ +require("../../jspsych"); +require("../../plugins/jspsych-html-keyboard-response"); + +describe("jsPsych init", () => { + beforeEach(() => { + document.body.innerHTML = ""; + }); + + function setReadyState(targetState) { + jest + .spyOn(document, "readyState", "get") + .mockImplementation(() => targetState); + } + + function getBodyHTML() { + return document.body.innerHTML; + } + + function init() { + jsPsych.init({ + timeline: [ + { + type: "html-keyboard-response", + stimulus: "foo", + }, + ], + }); + } + + it("should delay execution until the document is ready", () => { + expect(getBodyHTML()).toBe(""); + + setReadyState("loading"); + init(); + expect(getBodyHTML()).toBe(""); + + // Simulate the document getting ready + setReadyState("complete"); + window.dispatchEvent(new Event("load")); + expect(getBodyHTML()).not.toBe(""); + }); + + it("should execute immediately when the 
document is ready", () => { + // The document is ready by default in jsdom + init(); + expect(getBodyHTML()).not.toBe(""); + }); +}); diff --git a/tests/jsPsych/min-rt.test.js b/tests/jsPsych/min-rt.test.js new file mode 100644 index 0000000000..149c5670fd --- /dev/null +++ b/tests/jsPsych/min-rt.test.js @@ -0,0 +1,58 @@ +const root = '../../'; +const utils = require('../testing-utils.js'); + +// ideally, use fake timers for this test, but 'modern' timers that work +// with performance.now() break something in the first test. wait for fix? +//jest.useFakeTimers('modern'); +//jest.useFakeTimers(); + +beforeEach(function(){ + require(root + 'jspsych.js'); + require(root + 'plugins/jspsych-html-keyboard-response.js'); +}); + +describe('minimum_valid_rt parameter', function(){ + test('has a default value of 0', function(){ + var t = { + type: 'html-keyboard-response', + stimulus: 'foo' + } + + var t2 = { + type: 'html-keyboard-response', + stimulus: 'bar' + } + + jsPsych.init({timeline: [t,t2]}); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); + utils.pressKey('a'); + }); + + test('correctly prevents fast responses when set', function(done){ + var t = { + type: 'html-keyboard-response', + stimulus: 'foo' + } + + var t2 = { + type: 'html-keyboard-response', + stimulus: 'bar' + } + + jsPsych.init({timeline: [t,t2], minimum_valid_rt: 100}); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + setTimeout(function(){ + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); + utils.pressKey('a'); + done(); + }, 100) + + }); +}); \ No newline at end of file diff --git a/tests/jsPsych/progressbar.test.js b/tests/jsPsych/progressbar.test.js index 45e0f95589..538225a17d 100644 --- a/tests/jsPsych/progressbar.test.js +++ 
b/tests/jsPsych/progressbar.test.js @@ -20,7 +20,7 @@ describe('automatic progress bar', function(){ expect(document.querySelector('#jspsych-progressbar-container')).toBe(null); - utils.pressKey(32); + utils.pressKey('a'); }); test('progress bar displays when show_progress_bar is true', function(){ @@ -36,7 +36,7 @@ describe('automatic progress bar', function(){ expect(document.querySelector('#jspsych-progressbar-container').innerHTML).toMatch('Completion Progress
'); - utils.pressKey(32); + utils.pressKey('a'); }); test('progress bar automatically updates by default', function(){ @@ -52,19 +52,19 @@ describe('automatic progress bar', function(){ expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe(''); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe('25%'); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe('50%'); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe('75%'); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe('100%'); @@ -84,19 +84,19 @@ describe('automatic progress bar', function(){ expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe(''); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe(''); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe(''); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe(''); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe(''); @@ -127,11 +127,11 @@ describe('automatic progress bar', function(){ expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe(''); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe('20%'); - utils.pressKey(32); + utils.pressKey('a'); expect(document.querySelector('#jspsych-progressbar-inner').style.width).toBe('80%'); @@ -160,11 +160,11 @@ describe('automatic progress bar', function(){ auto_update_progress_bar: false }); - utils.pressKey(32); + 
utils.pressKey('a'); expect(jsPsych.getProgressBarCompleted()).toBe(0.2); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getProgressBarCompleted()).toBe(0.8); @@ -181,19 +181,19 @@ describe('automatic progress bar', function(){ show_progress_bar: true }); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getProgressBarCompleted()).toBe(0.25); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getProgressBarCompleted()).toBe(0.50); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getProgressBarCompleted()).toBe(0.75); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getProgressBarCompleted()).toBe(1); diff --git a/tests/jsPsych/timeline-variables.test.js b/tests/jsPsych/timeline-variables.test.js index 5627fb9e90..85f34e062c 100644 --- a/tests/jsPsych/timeline-variables.test.js +++ b/tests/jsPsych/timeline-variables.test.js @@ -47,12 +47,12 @@ describe('sampling', function(){ jsPsych.init({timeline: [trial]}); var last = jsPsych.getDisplayElement().innerHTML; for(var i=0;i<23;i++){ - utils.pressKey(32); + utils.pressKey('a'); var curr = jsPsych.getDisplayElement().innerHTML; expect(last).not.toMatch(curr); last = curr; } - utils.pressKey(32); + utils.pressKey('a'); }) test('sampling functions run when timeline loops', function(){ @@ -89,10 +89,10 @@ describe('sampling', function(){ for(var i=0; i 1', function(){ + var conditional_count = 0; + + var trial = { + timeline: [{ + type: 'html-keyboard-response', + stimulus: 'foo' + }], + repetitions: 2, + conditional_function: function(){ + conditional_count++; + return true; + } + } + + jsPsych.init({ + timeline: [trial] + }); + + expect(conditional_count).toBe(1); + + // first trial + utils.pressKey('a'); + + expect(conditional_count).toBe(1); + + // second trial + utils.pressKey('a'); + + expect(conditional_count).toBe(1); + }) + + test('executes only once when timeline variables are used', function(){ + var conditional_count = 0; + + var trial = { + timeline: [{ + 
type: 'html-keyboard-response', + stimulus: 'foo' + }], + timeline_variables: [ + {a:1}, + {a:2} + ], + conditional_function: function(){ + conditional_count++; + return true; + } + } + + jsPsych.init({ + timeline: [trial] + }); + + expect(conditional_count).toBe(1); + + // first trial + utils.pressKey('a'); + + expect(conditional_count).toBe(1); + + // second trial + utils.pressKey('a'); + + expect(conditional_count).toBe(1); + }) + test('timeline variables from nested timelines are available', function(){ var trial = { type: 'html-keyboard-response', @@ -227,7 +380,7 @@ describe('conditional function', function(){ var innertimeline = { timeline: [trial], conditional_function: function(){ - if(jsPsych.timelineVariable('word', true) == 'b'){ + if(jsPsych.timelineVariable('word') == 'b'){ return false; } else { return true; @@ -249,15 +402,15 @@ describe('conditional function', function(){ }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('a'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('b'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('c'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); - utils.pressKey(32); + utils.pressKey('a'); }); }); @@ -292,11 +445,11 @@ describe('endCurrentTimeline', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('woo'); - utils.pressKey(32); + utils.pressKey('a'); }); @@ -336,14 +489,81 @@ describe('endCurrentTimeline', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); - utils.pressKey(32); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); - utils.pressKey(32); + 
utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toMatch('woo'); - utils.pressKey(32); + utils.pressKey('a'); }) }); + + +describe('nested timelines', function() { + test('works without other parameters', function() { + var t1 = { + type: 'html-keyboard-response', + stimulus: 'foo' + }; + + var t2 = { + type: 'html-keyboard-response', + stimulus: 'bar' + }; + + var trials = { + timeline: [t1, t2] + }; + + jsPsych.init({ + timeline: [trials] + }); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + + utils.pressKey('a'); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); + + utils.pressKey('a'); + + }) +}) + +describe('add node to end of timeline', function(){ + + test('adds node to end of timeline', function() { + var new_trial = { + type: 'html-keyboard-response', + stimulus: 'bar' + }; + + var new_timeline = { + timeline: [new_trial] + }; + + var timeline = [ + { + type: 'html-keyboard-response', + stimulus: 'foo', + on_start: function() { + jsPsych.addNodeToEndOfTimeline(new_timeline); + } + } + ]; + + jsPsych.init({ + timeline: timeline + }); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + utils.pressKey('a'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); + utils.pressKey('a'); + }); + +}); + diff --git a/tests/media/blue.png b/tests/media/blue.png deleted file mode 100644 index 820bdce8f8..0000000000 Binary files a/tests/media/blue.png and /dev/null differ diff --git a/tests/media/orange.png b/tests/media/orange.png deleted file mode 100644 index 108e6e57c2..0000000000 Binary files a/tests/media/orange.png and /dev/null differ diff --git a/tests/media/sample_video.mp4 b/tests/media/sample_video.mp4 deleted file mode 100644 index 73bbd71a7f..0000000000 Binary files a/tests/media/sample_video.mp4 and /dev/null differ diff --git a/tests/media/sound.mp3 b/tests/media/sound.mp3 deleted file mode 100644 index a58f850cb6..0000000000 Binary files a/tests/media/sound.mp3 and 
/dev/null differ diff --git a/tests/plugins/plugin-animation.test.js b/tests/plugins/plugin-animation.test.js index 578bac4f9d..147ceb5033 100644 --- a/tests/plugins/plugin-animation.test.js +++ b/tests/plugins/plugin-animation.test.js @@ -20,12 +20,12 @@ describe('animation plugin', function(){ var trial = { type: 'animation', - stimuli: animation_sequence + stimuli: animation_sequence, + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(''); diff --git a/tests/plugins/plugin-categorize-animation.test.js b/tests/plugins/plugin-categorize-animation.test.js index f81343890e..fd261c1145 100644 --- a/tests/plugins/plugin-categorize-animation.test.js +++ b/tests/plugins/plugin-categorize-animation.test.js @@ -18,12 +18,12 @@ describe('categorize-animation plugin', function(){ var trial = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_1.jpg'], - key_answer: 68 + key_answer: 'd', + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); jest.runTimersToTime(500); @@ -36,13 +36,13 @@ describe('categorize-animation plugin', function(){ var trial = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_1.jpg'], - key_answer: 68, - prompt: "

Press d if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

" + key_answer: 'd', + prompt: "

Press d if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); jest.runTimersToTime(1500); @@ -53,18 +53,18 @@ describe('categorize-animation plugin', function(){ var trial = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_1.jpg'], - key_answer: 68, - choices: [68, 83], - prompt: "

Press d if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

" + key_answer: 'd', + choices: ['d', 's'], + prompt: "

Press d if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); jest.runTimersToTime(1500); - utils.pressKey(68); + utils.pressKey('d'); jest.runTimersToTime(1000); expect(jsPsych.getDisplayElement().innerHTML).toBe('Correct.'); }); @@ -73,18 +73,18 @@ describe('categorize-animation plugin', function(){ var trial = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_1.jpg'], - key_answer: 68, - choices: [68, 83], - prompt: "

Press d if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

" + key_answer: 'd', + choices: ['d', 's'], + prompt: "

Press d if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); jest.runTimersToTime(1500); - utils.pressKey(83); + utils.pressKey('s'); jest.runTimersToTime(1000); expect(jsPsych.getDisplayElement().innerHTML).toBe('Wrong.'); }); @@ -93,21 +93,21 @@ describe('categorize-animation plugin', function(){ var trials = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_3.jpg'], - key_answer: 68, - choices: [68, 83], + key_answer: 'd', + choices: ['d', 's'], text_answer: 'different', correct_text: "

Correct. The faces had %ANS% expressions.

", incorrect_text: "

Incorrect. The faces had %ANS% expressions.

", - prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false }; jsPsych.init({ - timeline: [trials], - auto_preload: false + timeline: [trials] }); jest.runTimersToTime(1500); - utils.pressKey(68); + utils.pressKey('d'); jest.runTimersToTime(1000); expect(jsPsych.getDisplayElement().innerHTML).toBe('

Correct. The faces had different expressions.

'); }); @@ -116,42 +116,42 @@ describe('categorize-animation plugin', function(){ var trials = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_3.jpg'], - key_answer: 68, - choices: [68, 83], + key_answer: 'd', + choices: ['d', 's'], correct_text: "

You pressed the correct key

", incorrect_text: "

Incorrect.

", - prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false }; jsPsych.init({ - timeline: [trials], - auto_preload: false + timeline: [trials] }); jest.runTimersToTime(1500); - utils.pressKey(68); + utils.pressKey('d'); jest.runTimersToTime(1000); expect(jsPsych.getDisplayElement().innerHTML).toBe('

You pressed the correct key

'); }); - test('correct text displays when when key_answer is pressed', function(){ + test('incorrect text displays when a key other than key_answer is pressed', function(){ var trials = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_3.jpg'], - key_answer: 68, - choices: [68, 83], + key_answer: 'd', + choices: ['d', 's'], correct_text: "

You pressed the correct key

", incorrect_text: "

Incorrect. You pressed the wrong key.

", - prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false }; jsPsych.init({ - timeline: [trials], - auto_preload: false + timeline: [trials] }); jest.runTimersToTime(1500); - utils.pressKey(83); + utils.pressKey('s'); jest.runTimersToTime(1000); expect(jsPsych.getDisplayElement().innerHTML).toBe('

Incorrect. You pressed the wrong key.

'); }); @@ -160,17 +160,17 @@ describe('categorize-animation plugin', function(){ var trials = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_1.jpg'], - key_answer: 68, - choices: [68, 83], + key_answer: 'd', + choices: ['d', 's'], frame_time: 1000, correct_text: "

You pressed the correct key

", incorrect_text: "

Incorrect. You pressed the wrong key.

", - prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false }; jsPsych.init({ - timeline: [trials], - auto_preload: false + timeline: [trials] }); jest.runTimersToTime(1000); @@ -185,18 +185,18 @@ describe('categorize-animation plugin', function(){ var trials = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_1.jpg'], - key_answer: 68, - choices: [68, 83], + key_answer: 'd', + choices: ['d', 's'], frame_time: 1000, sequence_reps: 2, correct_text: "

You pressed the correct key

", incorrect_text: "

Incorrect. You pressed the wrong key.

", - prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false }; jsPsych.init({ - timeline: [trials], - auto_preload: false + timeline: [trials] }); jest.runTimersToTime(1000); @@ -213,23 +213,23 @@ describe('categorize-animation plugin', function(){ var trials = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_1.jpg'], - key_answer: 68, - choices: [68, 83], + key_answer: 'd', + choices: ['d', 's'], frame_time: 1000, sequence_reps: 2, correct_text: "

You pressed the correct key

", incorrect_text: "

Incorrect. You pressed the wrong key.

", prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", - allow_response_before_complete: true, + allow_response_before_complete: true, + render_on_canvas: false }; jsPsych.init({ - timeline: [trials], - auto_preload: false + timeline: [trials] }); jest.runTimersToTime(1500); - utils.pressKey(68); + utils.pressKey('d'); jest.runTimersToTime(500); expect(jsPsych.getDisplayElement().innerHTML).toBe('

You pressed the correct key

'); }); @@ -238,22 +238,22 @@ describe('categorize-animation plugin', function(){ var trials = { type: 'categorize-animation', stimuli: ['img/happy_face_1.jpg', 'img/sad_face_1.jpg'], - key_answer: 68, - choices: [68, 83], + key_answer: 'd', + choices: ['d', 's'], frame_time: 500, feeback_duration: 500, correct_text: "

You pressed the correct key

", incorrect_text: "

Incorrect. You pressed the wrong key.

", - prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + prompt: "

Press D if the faces had different emotional expressions. Press S if the faces had the same emotional expression.

", + render_on_canvas: false }; jsPsych.init({ - timeline: [trials], - auto_preload: false + timeline: [trials] }); jest.runTimersToTime(1500); - utils.pressKey(68); + utils.pressKey('d'); jest.runTimersToTime(500); expect(jsPsych.getDisplayElement().innerHTML).toBe('

You pressed the correct key

'); jest.runTimersToTime(2000); diff --git a/tests/plugins/plugin-cloze.test.js b/tests/plugins/plugin-cloze.test.js index 7520f30357..e9a2e9f999 100644 --- a/tests/plugins/plugin-cloze.test.js +++ b/tests/plugins/plugin-cloze.test.js @@ -136,5 +136,22 @@ describe('cloze', function(){ expect(called).toBeTruthy(); }); - + test('response data is stored as an array', function(){ + var trial = { + type: 'cloze', + text: 'This is a %cloze% text. Here is another cloze response box %%.' + } + + jsPsych.init({ + timeline: [trial] + }); + + document.getElementById('input0').value = 'cloze1'; + document.getElementById('input1').value = 'cloze2'; + utils.clickTarget(document.querySelector('#finish_cloze_button')); + var data = jsPsych.data.get().values()[0].response; + expect(data.length).toBe(2); + expect(data[0]).toBe('cloze1'); + expect(data[1]).toBe('cloze2'); + }); }); \ No newline at end of file diff --git a/tests/plugins/plugin-free-sort.test.js b/tests/plugins/plugin-free-sort.test.js index 6f578042a3..922218d7ad 100644 --- a/tests/plugins/plugin-free-sort.test.js +++ b/tests/plugins/plugin-free-sort.test.js @@ -20,8 +20,7 @@ describe('free-sort plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('src=\"img/happy_face_1.jpg\" data-src=\"img/happy_face_1.jpg\"')); @@ -39,8 +38,7 @@ describe('free-sort plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('class=\"jspsych-free-sort-arena\" style=\"position: relative; width:700px; height:500px;')); @@ -55,11 +53,10 @@ describe('free-sort plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); - expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('This is a prompt

')); @@ -86,11 +82,10 @@ describe('free-sort plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); - expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

This is a prompt

')); + expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); }); }); diff --git a/tests/plugins/plugin-html-keyboard-response.test.js b/tests/plugins/plugin-html-keyboard-response.test.js index 7a9290f898..56ed537833 100644 --- a/tests/plugins/plugin-html-keyboard-response.test.js +++ b/tests/plugins/plugin-html-keyboard-response.test.js @@ -26,7 +26,7 @@ describe('html-keyboard-response', function(){ expect(jsPsych.getDisplayElement().innerHTML).toBe('
this is html
'); - utils.pressKey(70); + utils.pressKey('a'); }); test('display clears after key press', function(){ @@ -42,7 +42,7 @@ describe('html-keyboard-response', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('
this is html
')); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(''); }); @@ -61,7 +61,7 @@ describe('html-keyboard-response', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('
this is html
this is a prompt
')); - utils.pressKey(70); + utils.pressKey('f'); }); test('should hide stimulus if stimulus-duration is set', function(){ @@ -79,7 +79,7 @@ describe('html-keyboard-response', function(){ expect(jsPsych.getDisplayElement().querySelector('#jspsych-html-keyboard-response-stimulus').style.visibility).toMatch(""); jest.runTimersToTime(500); expect(jsPsych.getDisplayElement().querySelector('#jspsych-html-keyboard-response-stimulus').style.visibility).toMatch("hidden"); - utils.pressKey(70); + utils.pressKey('f'); }); test('should end trial when trial duration is reached', function(){ @@ -113,7 +113,7 @@ describe('html-keyboard-response', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('
this is html
')); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(''); }); @@ -132,7 +132,7 @@ describe('html-keyboard-response', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('
this is html
')); - utils.pressKey(70); + utils.pressKey('f'); expect(document.querySelector('#jspsych-html-keyboard-response-stimulus').className).toBe(' responded'); }); diff --git a/tests/plugins/plugin-iat-html.test.js b/tests/plugins/plugin-iat-html.test.js index 54cc0b2552..5c7cebb688 100644 --- a/tests/plugins/plugin-iat-html.test.js +++ b/tests/plugins/plugin-iat-html.test.js @@ -1,4 +1,5 @@ const root = '../../'; +const utils = require('../testing-utils.js'); jest.useFakeTimers(); @@ -33,8 +34,7 @@ describe('iat-html plugin', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch('

dogs

'); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 70})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 70})); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -53,14 +53,10 @@ describe('iat-html plugin', function(){ timeline: [trial] }); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); - + utils.pressKey(' '); expect(jsPsych.getDisplayElement().innerHTML).toMatch('

hello

'); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 70})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 70})); - + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -78,14 +74,10 @@ describe('iat-html plugin', function(){ timeline: [trial] }); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); - + utils.pressKey(' '); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

hello

')); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 74})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 74})); - + utils.pressKey('j'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -106,14 +98,10 @@ describe('iat-html plugin', function(){ timeline: [trial] }); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 70})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 70})); - + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

hello

')); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 32})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 32})); - + utils.pressKey(' '); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -134,14 +122,10 @@ describe('iat-html plugin', function(){ timeline: [trial] }); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 74})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 74})); - + utils.pressKey('j'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

hello

')); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 70})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 70})); - + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -165,9 +149,7 @@ describe('iat-html plugin', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

Press j for:
UNFRIENDLY')); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

Press f for:
FRIENDLY')); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode:70})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode:70})); - + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -193,14 +175,10 @@ describe('iat-html plugin', function(){ expect(jsPsych.getDisplayElement().querySelector('#wrongImgContainer').style.visibility).toBe('hidden'); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode:74})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode:74})); - + utils.pressKey('j'); expect(jsPsych.getDisplayElement().querySelector('#wrongImgContainer').style.visibility).toBe('visible'); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 70})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 70})); - + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -211,7 +189,8 @@ describe('iat-html plugin', function(){ stimulus: '

hello

', display_feedback: false, response_ends_trial: false, - trial_duration: 500 + trial_duration: 500, + stim_key_association: 'left' } jsPsych.init({ @@ -244,9 +223,7 @@ describe('iat-html plugin', function(){ timeline: [trial] }); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 70})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 70})); - + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

hello

')); jest.runAllTimers(); @@ -276,9 +253,7 @@ describe('iat-html plugin', function(){ jest.runTimersToTime(500); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 73})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 73})); - + utils.pressKey('i'); expect(jsPsych.getDisplayElement().querySelector('#wrongImgContainer').style.visibility).toBe('visible'); jest.runTimersToTime(1100); @@ -309,9 +284,7 @@ describe('iat-html plugin', function(){ expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

hello

')); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 73})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 73})); - + utils.pressKey('i'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

hello

')); jest.runTimersToTime(1000); @@ -320,9 +293,7 @@ describe('iat-html plugin', function(){ jest.runTimersToTime(1500); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: 69})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: 69})); - + utils.pressKey('e'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); }); diff --git a/tests/plugins/plugin-iat-image.test.js b/tests/plugins/plugin-iat-image.test.js index b8cdfb2a19..deab8091c9 100644 --- a/tests/plugins/plugin-iat-image.test.js +++ b/tests/plugins/plugin-iat-image.test.js @@ -29,13 +29,12 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(/blue.png/); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -51,14 +50,13 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); - utils.pressKey(74); + utils.pressKey('j'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(''); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -73,14 +71,13 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); - utils.pressKey(74); + utils.pressKey('j'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -98,14 +95,13 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); - utils.pressKey(32); + utils.pressKey('a'); 
expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -123,14 +119,13 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); - utils.pressKey(74); + utils.pressKey('j'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -148,14 +143,13 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

Press j for:
UNFRIENDLY')); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('

Press f for:
FRIENDLY')); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -176,15 +170,14 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().querySelector('#wrongImgContainer').style.visibility).toBe('hidden'); - utils.pressKey(74); + utils.pressKey('j'); expect(jsPsych.getDisplayElement().querySelector('#wrongImgContainer').style.visibility).toBe('visible'); - utils.pressKey(70); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); @@ -195,12 +188,12 @@ describe('iat-image plugin', function(){ stimulus: '../media/blue.png', display_feedback: false, response_ends_trial: false, + stim_key_association: 'left', trial_duration: 500 } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); @@ -226,11 +219,10 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); - utils.pressKey(70); + utils.pressKey('f'); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); jest.runAllTimers(); @@ -253,15 +245,14 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); jest.runTimersToTime(500); - utils.pressKey(73); + utils.pressKey('i'); expect(jsPsych.getDisplayElement().querySelector('#wrongImgContainer').style.visibility).toBe('visible'); jest.runTimersToTime(1100); @@ -287,13 +278,12 @@ describe('iat-image plugin', function(){ } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); - utils.pressKey(73); + utils.pressKey('i'); 
expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); jest.runTimersToTime(1000); @@ -302,7 +292,7 @@ describe('iat-image plugin', function(){ jest.runTimersToTime(1500); - utils.pressKey(69); + utils.pressKey('a'); expect(jsPsych.getDisplayElement().innerHTML).toBe(""); }); }); diff --git a/tests/plugins/plugin-image-button-response.test.js b/tests/plugins/plugin-image-button-response.test.js index fb5eade540..d22122128d 100644 --- a/tests/plugins/plugin-image-button-response.test.js +++ b/tests/plugins/plugin-image-button-response.test.js @@ -18,12 +18,12 @@ describe('image-button-response', function(){ var trial = { type: 'image-button-response', stimulus: '../media/blue.png', - choices: ['button-choice'] + choices: ['button-choice'], + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('button-choice1')); @@ -51,11 +51,11 @@ describe('image-button-response', function(){ stimulus: '../media/blue.png', choices: ['buttonChoice'], button_html: '', + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(new RegExp('')); @@ -65,12 +65,12 @@ describe('image-button-response', function(){ var trial = { type: 'image-button-response', stimulus: '../media/blue.png', - choices: ['button-choice'], + choices: ['button-choice'], + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('This is a prompt

' + prompt: '

This is a prompt

', + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('

This is a prompt

'); @@ -101,12 +101,12 @@ describe('image-button-response', function(){ type: 'image-button-response', stimulus: '../media/blue.png', choices: ['button-choice'], - stimulus_duration: 500 + stimulus_duration: 500, + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().querySelector('#jspsych-image-button-response-stimulus').style.visibility).toMatch(""); @@ -119,12 +119,12 @@ describe('image-button-response', function(){ type: 'image-button-response', stimulus: '../media/blue.png', choices: ['f','j'], - trial_duration: 500 + trial_duration: 500, + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('this is a prompt
</p>' + prompt: '<p>this is a prompt</p>
', + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('
<p>this is a prompt</p>
'); - utils.pressKey(70); + utils.pressKey('f'); }); test('should hide stimulus if stimulus-duration is set', function(){ @@ -72,17 +72,17 @@ describe('image-keyboard-response', function(){ stimulus: '../media/blue.png', choices:['f','j'], stimulus_duration: 500, + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().querySelector('#jspsych-image-keyboard-response-stimulus').style.visibility).toMatch(""); jest.runTimersToTime(500); expect(jsPsych.getDisplayElement().querySelector('#jspsych-image-keyboard-response-stimulus').style.visibility).toMatch("hidden"); - utils.pressKey(70); + utils.pressKey('f'); }); @@ -91,12 +91,12 @@ describe('image-keyboard-response', function(){ type: 'image-keyboard-response', stimulus: '../media/blue.png', choices: ['f','j'], - trial_duration: 500 + trial_duration: 500, + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('left'); @@ -57,12 +57,12 @@ describe('image-slider-response', function(){ type: 'image-slider-response', stimulus: '../media/blue.png', labels: ['left', 'right'], - button_label: 'button' + button_label: 'button', + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch(''); @@ -79,11 +79,11 @@ describe('image-slider-response', function(){ min: 2, max: 10, step: 2, + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().querySelector('#jspsych-image-slider-response-response').min).toBe('2'); @@ -99,12 +99,12 @@ describe('image-slider-response', function(){ stimulus: '../media/blue.png', labels: ['left', 'right'], button_label: 'button', - prompt: '

<p>This is a prompt</p>' + prompt: '<p>This is a prompt</p>
', + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('

<p>This is a prompt</p>

'); @@ -118,12 +118,12 @@ describe('image-slider-response', function(){ stimulus: '../media/blue.png', labels: ['left', 'right'], button_label: 'button', - stimulus_duration: 500 + stimulus_duration: 500, + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().querySelector('#jspsych-image-slider-response-stimulus').style.visibility).toMatch(""); @@ -138,13 +138,12 @@ describe('image-slider-response', function(){ stimulus: '../media/blue.png', labels: ['left', 'right'], button_label: 'button', - trial_duration: 500 - + trial_duration: 500, + render_on_canvas: false } jsPsych.init({ - timeline: [trial], - auto_preload: false + timeline: [trial] }); expect(jsPsych.getDisplayElement().innerHTML).toMatch('
{ cb(); }); + + var preload = { + type: 'preload', + auto_preload: true + } + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + test('auto_preload method works with simple timeline and audio stimulus', function () { + + require(root + 'plugins/jspsych-audio-keyboard-response.js'); + + jsPsych.pluginAPI.preloadAudio = jest.fn((x, cb) => { cb(); }); + + var preload = { + type: 'preload', + auto_preload: true + } + + var trial = { + type: 'audio-keyboard-response', + stimulus: 'sound/foo.mp3', + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + expect(jsPsych.pluginAPI.preloadAudio.mock.calls[0][0]).toStrictEqual(['sound/foo.mp3']); + + }); + + test('auto_preload method works with simple timeline and video stimulus', function () { + + require(root + 'plugins/jspsych-video-keyboard-response.js'); + + jsPsych.pluginAPI.preloadVideo = jest.fn((x, cb) => { cb(); }); + + var preload = { + type: 'preload', + auto_preload: true + } + + var trial = { + type: 'video-keyboard-response', + stimulus: 'video/foo.mp4' + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + expect(jsPsych.pluginAPI.preloadVideo.mock.calls[0][0]).toStrictEqual(['video/foo.mp4']); + + }); + + test('auto_preload method works with nested timeline', function () { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var preload = { + type: 'preload', + auto_preload: true + } + + var trial = { + type: 'image-keyboard-response', + render_on_canvas: false, + timeline: [ + {stimulus: 'img/foo.png'} + ] + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + test('auto_preload method works 
with looping timeline', function () { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var preload = { + type: 'preload', + auto_preload: true + } + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var count = 0; + var loop = { + timeline: [trial], + loop_function: function() { + if (count == 0) { + return true; + } else { + return false; + } + } + } + + jsPsych.init({ + timeline: [preload, loop] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + test('auto_preload method works with conditional timeline', function () { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var preload = { + type: 'preload', + auto_preload: true + } + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var count = 0; + var conditional = { + timeline: [trial], + conditional_function: function() { + if (count == 0) { + return true; + } else { + return false; + } + } + } + + jsPsych.init({ + timeline: [preload, conditional] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + test('auto_preload method works with timeline variables when stim is statically defined in trial object', function () { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var preload = { + type: 'preload', + auto_preload: true + } + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false, + data: jsPsych.timelineVariable('data') + } + + var trial_procedure = { + timeline: [trial], + timeline_variables: [ + {data: {trial: 1}}, + {data: {trial: 2}}, + {data: {trial: 3}} + ] + 
} + + jsPsych.init({ + timeline: [preload, trial_procedure] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + }); + + describe('trials parameter', function() { + + test('trials parameter works with simple timeline', function () { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var preload = { + type: 'preload', + trials: [trial] + } + + jsPsych.init({ + timeline: [preload] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + test('trials parameter works with looping timeline', function () { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var count = 0; + var loop = { + timeline: [trial], + loop_function: function() { + if (count == 0) { + return true; + } else { + return false; + } + } + } + + var preload = { + type: 'preload', + trials: [loop] + } + + jsPsych.init({ + timeline: [preload] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + test('trials parameter works with conditional timeline', function () { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var count = 0; + var conditional = { + timeline: [trial], + conditional_function: function() { + if (count == 0) { + return true; + } else { + return false; + } + } + } + + var preload = { + type: 'preload', + trials: [conditional] + } + + 
jsPsych.init({ + timeline: [preload] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + test('trials parameter works with timeline variables when stim is statically defined in trial object', function () { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false, + data: jsPsych.timelineVariable('data') + } + + var trial_procedure = { + timeline: [trial], + timeline_variables: [ + {data: {trial: 1}}, + {data: {trial: 2}}, + {data: {trial: 3}} + ] + } + + var preload = { + type: 'preload', + trials: [trial_procedure] + } + + jsPsych.init({ + timeline: [preload] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + }); + + describe('calls to pluginAPI preload functions', function() { + + test('auto_preload, trials, and manual preload array parameters can be used together', function () { + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var trial_1 = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var trial_2 = { + type: 'image-keyboard-response', + stimulus: 'img/bar.png', + render_on_canvas: false + } + + var preload = { + type: 'preload', + auto_preload: true, + trials: [trial_2], + images: ['img/fizz.png'] + } + + jsPsych.init({ + timeline: [preload, trial_1] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls.length).toBe(1); + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0].length).toBe(3); + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toContain('img/foo.png'); + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toContain('img/bar.png'); + 
expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toContain('img/fizz.png'); + + }); + + test('plugin only attempts to load duplicate files once', function () { + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb) => { cb(); }); + + var trial_1 = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var trial_2 = { + type: 'image-keyboard-response', + stimulus: 'img/foo.png', + render_on_canvas: false + } + + var preload = { + type: 'preload', + trials: [trial_2], + images: ['img/foo.png'] + } + + jsPsych.init({ + timeline: [preload, trial_1] + }); + + expect(jsPsych.pluginAPI.preloadImages.mock.calls.length).toBe(1); + expect(jsPsych.pluginAPI.preloadImages.mock.calls[0][0]).toStrictEqual(['img/foo.png']); + + }); + + }); + + describe('continue_after_error and error messages', function() { + + test('experiment continues when image loads successfully', function() { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb_complete, cb_load, cb_error) => { cb_load(); cb_complete(); }); + + var preload = { + type: 'preload', + auto_preload: true, + error_message: 'foo', + max_load_time: 100 + } + + var trial = { + type: 'image-keyboard-response', + stimulus: 'image.png', + render_on_canvas: false + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + + expect(jsPsych.getDisplayElement().innerHTML).toMatch(' { + cb_error({ + source: x, + error: { + } + }); + }); + + var preload = { + type: 'preload', + auto_preload: true, + error_message: 'foo', + max_load_time: 100, + on_error: function(e) { + expect(e).toContain('img/bar.png'); + } + } + + var trial = { + type: 'image-keyboard-response', + stimulus: 'img/bar.png', + render_on_canvas: false + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + + }); + + 
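The error-path tests in this file all hinge on the four-argument callback contract that the mocked `jsPsych.pluginAPI.preloadImages` receives: `(files, cb_complete, cb_load, cb_error)`. As a rough, self-contained sketch of that pattern (an assumption based on how the tests mock it, not jsPsych's internal implementation — `makePreloadImagesMock` is a hypothetical stand-in for the `jest.fn()` stubs above):

```javascript
// Hypothetical stand-in for the jest.fn() mocks used in these tests.
// Mocked contract: preloadImages(files, cb_complete, cb_load, cb_error).
function makePreloadImagesMock(failingFiles = []) {
  const calls = []; // mirrors jest's mock.calls
  const mock = (files, cbComplete, cbLoad, cbError) => {
    calls.push([files]);
    let anyFailed = false;
    for (const file of files) {
      if (failingFiles.includes(file)) {
        anyFailed = true;
        cbError({ source: file, error: {} }); // same shape the tests emit
      } else {
        cbLoad(); // one load event per file
      }
    }
    if (!anyFailed) cbComplete(); // only complete when nothing failed
  };
  mock.calls = calls;
  return mock;
}

// Success path: cb_load and cb_complete fire, call list records the batch.
const mock = makePreloadImagesMock();
let done = false;
mock(['img/foo.png'], () => { done = true; }, () => {}, () => {});
console.log(done);             // true
console.log(mock.calls[0][0]); // the batch: ['img/foo.png']

// Failure path: cb_error receives { source, error }, cb_complete never runs.
const failing = makePreloadImagesMock(['img/bar.png']);
let failedSource = null;
failing(['img/bar.png'], () => {}, () => {}, (e) => { failedSource = e.source; });
console.log(failedSource);     // 'img/bar.png'
```

This is why the tests can assert both on `mock.calls[0][0]` (which files were requested) and on display output after forcing the error branch.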
test('error_message is shown when continue_after_error is false and loading times out', function() { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + jest.useFakeTimers(); + + var mock_fn = jest.fn(function(x) {return x;}); + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb_complete, cb_load, cb_error) => { + // don't return anything here to simulate waiting forever for image to load + }); + + + var preload = { + type: 'preload', + auto_preload: true, + error_message: 'foo', + max_load_time: 100, + on_error: function(e) { + mock_fn(e); + } + } + + var trial = { + type: 'image-keyboard-response', + stimulus: 'blue.png', + render_on_canvas: false + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + jest.advanceTimersByTime(101); + + expect(mock_fn).toHaveBeenCalledWith('timeout'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('foo'); + + + }); + + test('experiment continues when continue_after_error is true and files fail', function() { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + var mock_fn = jest.fn(function(x) {return x;}); + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb_complete, cb_load, cb_error) => { + cb_error({ + source: x, + error: { + } + }); + }); + + var preload = { + type: 'preload', + images: ['img/foo.png'], + error_message: 'bar', + max_load_time: null, + continue_after_error: true, + on_error: function(e) { + mock_fn('loading failed'); + } + } + + var trial = { + type: 'image-keyboard-response', + stimulus: 'blue.png', + render_on_canvas: false + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + + expect(mock_fn).toHaveBeenCalledWith('loading failed'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch(' { + // don't return anything here to simulate waiting forever for image to load + }); + + var preload = { + type: 'preload', + auto_preload: true, + error_message: 'bar', + max_load_time: 100, + continue_after_error: true, + on_error: function(e) { + mock_fn(e); + } 
+ } + + var trial = { + type: 'image-keyboard-response', + stimulus: '../media/blue.png', + render_on_canvas: false + } + + jsPsych.init({ + timeline: [preload, trial] + }); + + jest.advanceTimersByTime(101); + + expect(mock_fn).toHaveBeenCalledWith('timeout'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch(' { + cb_error({ + source: x, + error: { + } + }); + }); + + var preload = { + type: 'preload', + images: ['img/foo.png'], + error_message: 'bar', + show_detailed_errors: true, + on_error: function(e) { + mock_fn('loading failed'); + } + } + + jsPsych.init({ + timeline: [preload] + }); + + + expect(mock_fn).toHaveBeenCalledWith('loading failed'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('Error details'); + + + }); + + }); + + describe('display while loading', function() { + + test('custom loading message is shown above progress bar if specified', function() { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + var preload = { + type: 'preload', + images: ['img/foo.png'], + message: 'bar', + max_load_time: 100 + } + + jsPsych.init({ + timeline: [preload] + }); + + expect(jsPsych.getDisplayElement().innerHTML).toMatch('bar'); + expect(jsPsych.getDisplayElement().innerHTML).toMatch('
{ + if(x.includes('blue.png')){ + cb_load(); + cb_complete(); + } else { + cb_error({ + source: x, + error: { + } + }); + } + }); + jsPsych.pluginAPI.preloadVideo = jest.fn((x, cb_complete, cb_load, cb_error) => { + cb_error({ + source: x, + error: { + } + }); + }); + jsPsych.pluginAPI.preloadAudio = jest.fn((x, cb_complete, cb_load, cb_error) => { + cb_error({ + source: x, + error: { + } + }); + }); + + var preload_1 = { + type: 'preload', + images: ['foo.png'], + audio: ['bar.mp3'], + video: ['buzz.mp4'], + continue_after_error: true, + on_error: function(e) { + mock_fn('loading failed'); + }, + on_success: function(e) { + mock_fn('loading succeeded'); + } + } + + var preload_2 = { + type: 'preload', + images: ['blue.png'], + max_load_time: 100, + on_error: function(e) { + mock_fn('loading failed'); + }, + on_success: function(e) { + mock_fn('loading succeeded'); + } + } + + jsPsych.init({ + timeline: [preload_1, preload_2] + }); + + + expect(mock_fn.mock.calls[0][0]).toBe('loading failed'); + expect(mock_fn.mock.calls[1][0]).toBe('loading failed'); + expect(mock_fn.mock.calls[2][0]).toBe('loading failed'); + expect(mock_fn.mock.calls[3][0]).toBe('loading succeeded'); + + + }); + + test('on_error/on_success callbacks are not called after loading times out', function() { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + var mock_fn = jest.fn(function(x) {return x;}); + var cancel_preload_spy = jest.spyOn(jsPsych.pluginAPI, 'cancelPreloads'); + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb_complete, cb_load, cb_error) => { + // empty to simulate timeout + }); + jsPsych.pluginAPI.preloadVideo = jest.fn((x, cb_complete, cb_load, cb_error) => { + // empty to simulate timeout + }); + jsPsych.pluginAPI.preloadAudio = jest.fn((x, cb_complete, cb_load, cb_error) => { + // empty to simulate timeout + }); + jest.useFakeTimers(); + + var preload = { + type: 'preload', + images: ['img/foo.png', 'blue.png'], + audio: ['audio/bar.mp3'], + video: 
['video/buzz.mp4'], + continue_after_error: true, + max_load_time: 100, + on_error: function(e) { + if (e == "timeout") { + mock_fn(e); + } else { + mock_fn('loading failed'); + } + }, + on_success: function(e) { + mock_fn('loading succeeded'); + } + } + + jsPsych.init({ + timeline: [preload] + }); + + jest.advanceTimersByTime(101); + + expect(mock_fn).toHaveBeenCalledWith('timeout'); + expect(mock_fn).toHaveBeenLastCalledWith('timeout'); + expect(cancel_preload_spy).toHaveBeenCalled(); + + + }); + + test('experiment stops with default error_message and on_error/on_success callbacks are not called after preload trial ends with error', function() { + + require(root + 'plugins/jspsych-image-keyboard-response.js'); + + var mock_fn = jest.fn(function(x) {return x;}); + var cancel_preload_spy = jest.spyOn(jsPsych.pluginAPI,'cancelPreloads'); + jest.useFakeTimers(); + jsPsych.pluginAPI.preloadImages = jest.fn((x, cb_complete, cb_load, cb_error) => { + if(x.includes('blue.png')){ + cb_load(); + cb_complete(); + } else { + + } + }); + jsPsych.pluginAPI.preloadVideo = jest.fn((x, cb_complete, cb_load, cb_error) => { + + }); + jsPsych.pluginAPI.preloadAudio = jest.fn((x, cb_complete, cb_load, cb_error) => { + + }); + + var preload_1 = { + type: 'preload', + images: ['img/foo.png'], + audio: ['audio/bar.mp3'], + video: ['video/buzz.mp4'], + max_load_time: 100, + on_error: function(e) { + if (e == 'timeout') { + mock_fn(e); + } else { + mock_fn('loading failed'); + } + }, + on_success: function(e) { + mock_fn('loading succeeded'); + } + } + + var preload_2 = { + type: 'preload', + images: ['../media/blue.png'], + max_load_time: 100, + on_error: function(e) { + mock_fn('loading failed'); + }, + on_success: function(e) { + mock_fn('loading succeeded'); + } + } + + jsPsych.init({ + timeline: [preload_1, preload_2] + }); + + jest.advanceTimersByTime(101); + + expect(mock_fn).toHaveBeenCalledWith('timeout'); + expect(mock_fn).toHaveBeenLastCalledWith('timeout'); + 
expect(jsPsych.getDisplayElement().innerHTML).toMatch('The experiment failed to load.'); + expect(cancel_preload_spy).toHaveBeenCalled(); + + + }); + + }); + +}); diff --git a/tests/plugins/plugin-rdk.test.js b/tests/plugins/plugin-rdk.test.js index 30c3395237..d11cc03c9c 100644 --- a/tests/plugins/plugin-rdk.test.js +++ b/tests/plugins/plugin-rdk.test.js @@ -14,4 +14,48 @@ describe('rdk plugin', function(){ expect(typeof window.jsPsych.plugins['rdk']).not.toBe('undefined'); }); + test('choices and frame data are stored as arrays', function(){ + var trial = { + type: 'rdk', + number_of_dots: 200, + RDK_type: 3, + choices: ['a', 'l'], + correct_choice: 'l', + coherent_direction: 0 + } + + jsPsych.init({ + timeline: [trial] + }); + + utils.pressKey('l') + var data = jsPsych.data.get().values()[0]; + expect(Array.isArray(data.choices)).toBe(true); + expect(data.choices).toStrictEqual(['a', 'l']); + expect(Array.isArray(data.frame_rate_array)).toBe(true); + }); + + test('responses are scored correctly', function(){ + var trial = { + type: 'rdk', + number_of_dots: 200, + RDK_type: 3, + choices: ['a', 'l'], + correct_choice: 'l', + coherent_direction: 0 + } + jsPsych.init({ + timeline: [trial,trial] + }); + + utils.pressKey('l'); + utils.pressKey('a'); + + var data = jsPsych.data.get().values(); + expect(data[0].response).toBe('l'); + expect(data[0].correct).toBe(true); + expect(data[1].response).toBe('a'); + expect(data[1].correct).toBe(false); + }); + }); diff --git a/tests/plugins/plugin-serial-reaction-time.test.js b/tests/plugins/plugin-serial-reaction-time.test.js index 5df9ad0819..80d2798a71 100644 --- a/tests/plugins/plugin-serial-reaction-time.test.js +++ b/tests/plugins/plugin-serial-reaction-time.test.js @@ -30,7 +30,7 @@ describe('serial-reaction-time plugin', function(){ expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-2').style.backgroundColor).toBe(''); 
expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-3').style.backgroundColor).toBe(''); - utils.pressKey(51); + utils.pressKey('3'); expect(jsPsych.getDisplayElement().innerHTML).toBe(''); expect(jsPsych.data.get().last(1).values()[0].correct).toBe(true); @@ -55,14 +55,54 @@ describe('serial-reaction-time plugin', function(){ expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-2').style.backgroundColor).toBe(''); expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-3').style.backgroundColor).toBe(''); - utils.pressKey(51); + utils.pressKey('3'); expect(jsPsych.getDisplayElement().innerHTML).not.toBe(''); jest.runTimersToTime(1000); expect(jsPsych.getDisplayElement().innerHTML).toBe(''); - //expect(jsPsych.data.get().last(1).values()[0].correct).toBe(true); + expect(jsPsych.data.get().last(1).values()[0].correct).toBe(true); + + }); + + test('responses are scored correctly', function(){ + + var trial1 = { + type: 'serial-reaction-time', + target: [0,0] + } + + var trial2 = { + type: 'serial-reaction-time', + target: [0,1] + } + + jsPsych.init({ + timeline: [trial1, trial2] + }); + + expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-0').style.backgroundColor).toBe('rgb(153, 153, 153)'); + expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-1').style.backgroundColor).toBe(''); + expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-2').style.backgroundColor).toBe(''); + expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-3').style.backgroundColor).toBe(''); + + utils.pressKey('3'); + + jest.runAllTimers(); + + expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-0').style.backgroundColor).toBe(''); + expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-1').style.backgroundColor).toBe('rgb(153, 153, 153)'); + 
expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-2').style.backgroundColor).toBe(''); + expect(document.querySelector('#jspsych-serial-reaction-time-stimulus-cell-0-3').style.backgroundColor).toBe(''); + + utils.pressKey('3'); + + expect(jsPsych.getDisplayElement().innerHTML).toBe(''); + + var trial_data = jsPsych.data.get().last(2).values(); + expect(trial_data[0].correct).toBe(true); + expect(trial_data[1].correct).toBe(false); }); diff --git a/tests/plugins/plugin-survey-html-form.test.js b/tests/plugins/plugin-survey-html-form.test.js index d73ef2ba70..a5f6acd632 100644 --- a/tests/plugins/plugin-survey-html-form.test.js +++ b/tests/plugins/plugin-survey-html-form.test.js @@ -37,7 +37,7 @@ describe('survey-html-form plugin', function(){ expect(jsPsych.getDisplayElement().innerHTML).toBe(''); // Check whether data is parsed properly - var data = JSON.parse(jsPsych.data.get().values()[0].responses) + var data = jsPsych.data.get().values()[0].response; expect(data.second).toBe(TEST_VALUE) }); diff --git a/tests/plugins/plugin-survey-likert.test.js b/tests/plugins/plugin-survey-likert.test.js index 73c241b6c1..a67783f7a2 100644 --- a/tests/plugins/plugin-survey-likert.test.js +++ b/tests/plugins/plugin-survey-likert.test.js @@ -37,7 +37,7 @@ describe('survey-likert plugin', function(){ utils.clickTarget(document.querySelector('#jspsych-survey-likert-next')); - var survey_data = JSON.parse(jsPsych.data.get().values()[0].responses); + var survey_data = jsPsych.data.get().values()[0].response; expect(survey_data.Q0).toBe(0); expect(survey_data.Q1).toBe(1); expect(survey_data.Q2).toBe(2); diff --git a/tests/plugins/plugin-survey-multi-choice.test.js b/tests/plugins/plugin-survey-multi-choice.test.js index 600d3fec98..71d630a792 100644 --- a/tests/plugins/plugin-survey-multi-choice.test.js +++ b/tests/plugins/plugin-survey-multi-choice.test.js @@ -37,8 +37,7 @@ describe('survey-multi-choice plugin', function(){ 
utils.clickTarget(document.querySelector('#jspsych-survey-multi-choice-next')); - var survey_data = JSON.parse(jsPsych.data.get().values()[0].responses); - console.log(jsPsych.data.get().json()) + var survey_data = jsPsych.data.get().values()[0].response; expect(survey_data.Q0).toBe('a'); expect(survey_data.Q1).toBe('b'); expect(survey_data.Q2).toBe('c'); diff --git a/tests/plugins/plugin-survey-multi-select.test.js b/tests/plugins/plugin-survey-multi-select.test.js index e769e6512a..b96393ec9b 100644 --- a/tests/plugins/plugin-survey-multi-select.test.js +++ b/tests/plugins/plugin-survey-multi-select.test.js @@ -60,8 +60,7 @@ describe('survey-multi-select plugin', function(){ utils.clickTarget(document.querySelector('#jspsych-survey-multi-select-next')); - var survey_data = JSON.parse(jsPsych.data.get().values()[0].responses); - console.log(jsPsych.data.get().json()) + var survey_data = jsPsych.data.get().values()[0].response; expect(survey_data.Q0[0]).toBe('a'); expect(survey_data.Q1[0]).toBe('b'); expect(survey_data.Q2[0]).toBe('c'); diff --git a/tests/plugins/plugin-survey-text.test.js b/tests/plugins/plugin-survey-text.test.js index a59c0c1aef..9acb479c61 100644 --- a/tests/plugins/plugin-survey-text.test.js +++ b/tests/plugins/plugin-survey-text.test.js @@ -104,7 +104,7 @@ describe('survey-text plugin', function(){ utils.clickTarget(document.querySelector('#jspsych-survey-text-next')); - var survey_data = JSON.parse(jsPsych.data.get().values()[0].responses); + var survey_data = jsPsych.data.get().values()[0].response; expect(survey_data.Q0).toBe('a0'); expect(survey_data.Q1).toBe('a1'); expect(survey_data.Q2).toBe('a2'); diff --git a/tests/plugins/plugin-video-button-response.test.js b/tests/plugins/plugin-video-button-response.test.js index 7a0c2ad7bc..eaeeda5820 100644 --- a/tests/plugins/plugin-video-button-response.test.js +++ b/tests/plugins/plugin-video-button-response.test.js @@ -6,11 +6,27 @@ describe('video-button-response plugin', function(){ 
beforeEach(function(){ require(root + 'jspsych.js'); - require(root + 'plugins/jspsych-video-button-response.js'); + // don't load plugin here - need to spy on registerPreload before its called }); test('loads correctly', function(){ + require(root + 'plugins/jspsych-video-button-response.js'); expect(typeof window.jsPsych.plugins['video-button-response']).not.toBe('undefined'); }); + test('video preloading registers correctly', function(){ + const preload_spy = jest.spyOn(jsPsych.pluginAPI, 'registerPreload'); + require(root + 'plugins/jspsych-video-button-response.js'); + var trial = { + type: 'video-button-response', + stimulus: ['vid.mp4'], + choices: ['y'] + } + jsPsych.init({ + timeline: [trial] + }); + expect(preload_spy).toHaveBeenCalled(); + preload_spy.mockRestore(); + }); + }); diff --git a/tests/plugins/plugin-video-keyboard-response.test.js b/tests/plugins/plugin-video-keyboard-response.test.js index 9ca09bd522..6f389aca53 100644 --- a/tests/plugins/plugin-video-keyboard-response.test.js +++ b/tests/plugins/plugin-video-keyboard-response.test.js @@ -6,11 +6,27 @@ describe('video-keyboard-response plugin', function(){ beforeEach(function(){ require(root + 'jspsych.js'); - require(root + 'plugins/jspsych-video-keyboard-response.js'); + // don't load plugin here - need to spy on registerPreload before its called }); test('loads correctly', function(){ + require(root + 'plugins/jspsych-video-keyboard-response.js'); expect(typeof window.jsPsych.plugins['video-keyboard-response']).not.toBe('undefined'); }); + test('video preloading registers correctly', function(){ + const preload_spy = jest.spyOn(jsPsych.pluginAPI, 'registerPreload'); + require(root + 'plugins/jspsych-video-keyboard-response.js'); + var trial = { + type: 'video-keyboard-response', + stimulus: ['video.mp4'], + choices: jsPsych.ALL_KEYS + } + jsPsych.init({ + timeline: [trial] + }); + expect(preload_spy).toHaveBeenCalled(); + preload_spy.mockRestore(); + }); + }); diff --git 
a/tests/plugins/plugin-video-slider-response.test.js b/tests/plugins/plugin-video-slider-response.test.js index add93c8c02..2dec34e40c 100644 --- a/tests/plugins/plugin-video-slider-response.test.js +++ b/tests/plugins/plugin-video-slider-response.test.js @@ -6,11 +6,26 @@ describe('video-slider-response plugin', function(){ beforeEach(function(){ require(root + 'jspsych.js'); - require(root + 'plugins/jspsych-video-slider-response.js'); + // don't load plugin here - need to spy on registerPreload before its called }); test('loads correctly', function(){ + require(root + 'plugins/jspsych-video-slider-response.js'); expect(typeof window.jsPsych.plugins['video-slider-response']).not.toBe('undefined'); }); + test('video preloading registers correctly', function(){ + const preload_spy = jest.spyOn(jsPsych.pluginAPI, 'registerPreload'); + require(root + 'plugins/jspsych-video-slider-response.js'); + var trial = { + type: 'video-slider-response', + stimulus: ['video.mp4'] + } + jsPsych.init({ + timeline: [trial] + }); + expect(preload_spy).toHaveBeenCalled(); + preload_spy.mockRestore(); + }); + }); diff --git a/tests/testing-utils.js b/tests/testing-utils.js index 7473614a59..b85f6c5d39 100644 --- a/tests/testing-utils.js +++ b/tests/testing-utils.js @@ -1,6 +1,6 @@ exports.pressKey = function(key){ - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {keyCode: key})); - document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {keyCode: key})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keydown', {key: key})); + document.querySelector('.jspsych-display-element').dispatchEvent(new KeyboardEvent('keyup', {key: key})); } exports.mouseDownMouseUpTarget = function(target){ diff --git a/tests/timing-tests/calibration-timeline.js b/tests/timing-tests/calibration-timeline.js new file mode 100644 index 0000000000..341a7d424c --- /dev/null +++ 
b/tests/timing-tests/calibration-timeline.js @@ -0,0 +1,15 @@ +var calibration = { + timeline: [{ + type: 'html-keyboard-response', + stimulus: `
`, + trial_duration: 200, + post_trial_gap: 100 + }], + loop_function: function(data){ + if(data.values()[0].response == ' '){ + return false; + } else { + return true; + } + } +} \ No newline at end of file diff --git a/tests/timing-tests/square-flicker.html b/tests/timing-tests/square-flicker.html new file mode 100644 index 0000000000..0aa07e93ad --- /dev/null +++ b/tests/timing-tests/square-flicker.html @@ -0,0 +1,42 @@ + + + + + + + + + + + + + \ No newline at end of file
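The "plugin only attempts to load duplicate files once" test above asserts that files gathered from `auto_preload`, the `trials` parameter, and the manual `images` array collapse into a single `preloadImages` call with a unique file list. A minimal sketch of that deduplication step (an assumption about the behavior the test pins down, not jsPsych's actual source):

```javascript
// Sketch of the deduplication the preload-plugin test asserts:
// merge files from all three sources, drop duplicates via a Set.
const fromAutoPreload = ['img/foo.png'];
const fromTrialsParam = ['img/foo.png'];
const fromImagesParam = ['img/foo.png'];

const uniqueFiles = [
  ...new Set([...fromAutoPreload, ...fromTrialsParam, ...fromImagesParam]),
];

console.log(uniqueFiles.length); // 1
console.log(uniqueFiles);        // ['img/foo.png'] — one request per unique file
```

The test's expectation `preloadImages.mock.calls.length === 1` with argument `['img/foo.png']` is exactly this behavior.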