<tr><td><em>chunkSize</em></td><td>Sets the chunk size used for compression. Must be less than maxSize.</td></tr>
<tr><td><em>compressionLevel</em></td><td>Level of compression, ranging from 0 to 9, where 9 is the highest level of compression. The default is level 3.</td></tr>
<tr><td><em>offset</em></td><td>Axis offset of the dataset to append. May be used to overwrite data.</td></tr></table>
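<p>As a quick illustration of these arguments, a <tt>DataPipe</tt> could be configured as in the minimal sketch below. The variable names and the specific chunkSize and compressionLevel values are assumptions chosen for demonstration, not taken from this tutorial's own code.</p><pre class="codeinput">% Sketch (assumed values): compress fake data with an explicit chunk size
% and a higher compression level; offset is not specified here.
fData = randi(250, 100, 1000);                 % fake data to compress
fData_compressed = types.untyped.DataPipe( ...
    'data', fData, ...
    'chunkSize', [1 1000], ...                 % one chunk per row of fData
    'compressionLevel', 5);                    % 0 (none) to 9 (maximum); default is 3
</pre>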
-</p><h2 id="6">Chunking</h2><p>HDF5 datasets can be stored in either contiguous or chunked mode. Contiguous means that all of the data is written to one contiguous block on the hard drive; chunked means that the dataset is automatically split into chunks that are distributed across the hard drive. The user does not need to know which mode is used: HDF5 gathers the chunks automatically. However, it is worth understanding chunking because it can have a big impact on the space used and on read and write speed. When using compression, the dataset MUST be chunked; HDF5 cannot apply compression to contiguous datasets.</p><p>If chunkSize is not explicitly specified, DataPipe will determine an appropriate chunk size. However, you can optimize compression performance by specifying the chunk size manually with the <i>chunkSize</i> argument.</p><p>We can demonstrate the benefit of chunking with the following scenario. The code below uses DataPipe's default chunk size:</p><pre class="codeinput">fData = randi(250, 1000, 1000); <span class="comment">% Create fake data</span>
+</p><h2 id="6">Chunking</h2><p>HDF5 datasets can be stored in either contiguous or chunked mode. Contiguous means that all of the data is written to one contiguous block on the hard drive; chunked means that the dataset is automatically split into chunks that are distributed across the hard drive. The user does not need to know which mode is used: HDF5 gathers the chunks automatically. However, it is worth understanding chunking because it can have a big impact on the space used and on read and write speed. When using compression, the dataset MUST be chunked; HDF5 cannot apply compression to contiguous datasets.</p><p>If chunkSize is not explicitly specified, DataPipe will determine an appropriate chunk size. However, you can optimize compression performance by specifying the chunk size manually with the <i>chunkSize</i> argument.</p><p>We can demonstrate the benefit of chunking with the following scenario. The code below uses DataPipe's default chunk size:</p><pre class="codeinput">fData = randi(250, 100, 1000); <span class="comment">% Create fake data</span>
<span class="comment">% create an nwb structure with required fields</span>
-</pre><p>This change results in the operation completing in 0.7 seconds and a resulting file size of 1.1 MB. The chunk size was chosen so that it spans each individual row of the matrix.</p><p>Use the combination of arguments that fits your needs. When dealing with large datasets, you may want to use iterative writing to stay within the bounds of your system memory, and use chunking and compression to optimize storage and the reading and writing of the data.</p><h2 id="9">Iterative Writing</h2><p>If experimental data is close to, or exceeds, the available system memory, performance issues may arise. To combat this, <tt>DataPipe</tt> can use iterative writing, where only a portion of the data is compressed and saved first, and additional portions are appended afterwards.</p><p>To demonstrate, we can create an NWB file with compressed time series data:</p><pre class="codeinput">dataPart1 = randi(250, 10000, 1); <span class="comment">% "load" 1/4 of the entire dataset</span>
-fullDataSize = [40000 1]; <span class="comment">% this is the size of the TOTAL dataset</span>
+</pre><p>This change results in the operation completing in 0.7 seconds and a resulting file size of 1.1 MB. The chunk size was chosen so that it spans each individual row of the matrix.</p><p>Use the combination of arguments that fits your needs. When dealing with large datasets, you may want to use iterative writing to stay within the bounds of your system memory, and use chunking and compression to optimize storage and the reading and writing of the data.</p><h2 id="9">Iterative Writing</h2><p>If experimental data is close to, or exceeds, the available system memory, performance issues may arise. To combat this, <tt>DataPipe</tt> can use iterative writing, where only a portion of the data is compressed and saved first, and additional portions are appended afterwards.</p><p>To demonstrate, we can create an NWB file with compressed time series data:</p><pre class="codeinput">dataPart1 = randi(250, 1, 10000); <span class="comment">% "load" 1/4 of the entire dataset</span>
+fullDataSize = [1 40000]; <span class="comment">% this is the size of the TOTAL dataset</span>
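% Note on the sizes (assumed from context, since the DataPipe construction is
% omitted from this excerpt): fullDataSize is the value intended for the
% DataPipe 'maxSize' argument; it fixes the final extent of the dataset on
% disk, and append() grows the data along the dimension selected by the
% 'axis' argument (the second dimension in this layout).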
<span class="comment">% create an nwb structure with required fields</span>
-</pre><p>To append the rest of the data, simply load the NWB file and use the append method:</p><pre class="codeinput">nwb = nwbRead(<span class="string">'DataPipeTutorial_iterate.nwb'</span>); <span class="comment">% load the nwb file with partial data</span>
+</pre><p>To append the rest of the data, simply load the NWB file and use the append method:</p><pre class="codeinput">nwb = nwbRead(<span class="string">'DataPipeTutorial_iterate.nwb'</span>, <span class="string">'ignorecache'</span>); <span class="comment">% load the nwb file with partial data</span>
<span class="comment">% "load" each of the remaining 1/4ths of the large dataset</span>
<span class="keyword">for</span> i = 2:4 <span class="comment">% iterating through parts of data</span>
-    dataPart_i = randi(250, 10000, 1); <span class="comment">% faked data chunk as if it was loaded</span>
+    dataPart_i = randi(250, 1, 10000); <span class="comment">% faked data chunk as if it was loaded</span>
    nwb.acquisition.get(<span class="string">'time_series'</span>).data.append(dataPart_i); <span class="comment">% append the loaded data</span>
<span class="keyword">end</span>
-</pre><p>The axis property defines the dimension along which additional data will be appended. In the above example, the resulting dataset will be 40000x1. However, if we set axis to 2 (and change fullDataSize appropriately), then the resulting dataset will be 10000x4.</p><h2 id="13">Timeseries example</h2><p>Following is an example of how to compress and add a timeseries to an NWB file:</p><pre class="codeinput">fData = randi(250, 10000, 1); <span class="comment">% create fake data</span>
+</pre><p>The axis property defines the dimension along which additional data will be appended. In the above example, the resulting dataset will be 1x40000. However, if we set axis to 1 (and change fullDataSize appropriately), then the resulting dataset will be 4x10000.</p><h2 id="13">Timeseries example</h2><p>Following is an example of how to compress and add a timeseries to an NWB file:</p><pre class="codeinput">fData = randi(250, 1, 10000); <span class="comment">% create fake data</span>
<span class="comment">% assign data without compression</span>
nwb = NwbFile(<span class="keyword">...</span>
@@ -154,14 +158,16 @@
<span class="comment">% Assign the data to the appropriate module and write the NWB file</span>