Support reading records via stdin #22
Sorry Johannes, I'm a little confused about what you'd like to do 😿 , could you please give a more specific example?
I would like to feed groups of records to the commands via standard input, not via command-line parameters.
Here's a simple example. But there's a length limit when passing them as command-line arguments.
The difference in your example is that echo does not read via standard input. A specific example, yes :) ... Consider downloading 100 million gene sequences by accession: you want to spawn, say, 6 downloaders and give each a block of 10k accessions to download and write to standard output. Here, one command gets 10k records; trying to provide them as command-line parameters will likely not work (if it does, add zeroes until it doesn't). Smaller blocks will hammer the server.
I see. rush can't do that. Anyway, you can split the records into multiple blocks yourself and feed them to the commands as you said.
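One such workaround, sketched here with coreutils only (`wc -l` is a stand-in for the real downloader command, and the filenames are hypothetical):

```shell
# Pre-split the records with coreutils `split`, then run one worker per
# chunk, feeding each chunk via stdin.
seq 1 300 > accessions.txt          # stand-in for the real accession list
split -l 100 accessions.txt chunk_  # chunk_aa, chunk_ab, chunk_ac: 100 records each
for f in chunk_*; do
  wc -l < "$f" &                    # each worker reads its block on stdin
done
wait
```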
Workarounds are possible; I guess this is a convenience feature request. It is just very convenient (and very useful with large data) to feed information via a pipe rather than via command-line options. There are many examples of using '--pipe' in GNU parallel.
Oh, I have this problem too.
I would expect this command to send 100 lines to each instance, but it doesn't:
Parallel equivalent:
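The original commands were stripped from the thread and can't be recovered, but a hypothetical GNU parallel invocation that sends exactly 100 lines to each instance via stdin (using parallel's documented `--pipe` and `-N` flags) would look like:

```shell
# Split stdin into blocks of exactly 100 records; each block goes to one
# worker's stdin, so each `wc -l` here reports 100.
seq 1 1000 | parallel --pipe -N100 wc -l
```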
Hi, great tool! I like all of your *kit programs.
One thing I use a lot in GNU parallel is the --pipe option, where the records are divided into blocks and provided to the commands via stdin. This is very useful when single commands work on a large number of records, and stdin is better than command-line arguments, which have size restrictions. rush can use an explicit number of records, which I sometimes prefer and which GNU parallel cannot do, because its block size is defined by (approximate) data size for performance reasons.
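The size-based versus count-based distinction can be sketched with parallel's documented flags (note that newer parallel versions do also offer a record count via `-N`, used here for the contrast):

```shell
# --pipe with --block chops stdin by approximate byte size ...
seq 1 100000 | parallel --pipe --block 1M wc -l   # line counts per job vary
# ... whereas -N fixes the exact record count per job
seq 1 100000 | parallel --pipe -N1000 wc -l       # 1000 lines per job
```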
Is there any chance this feature makes it into rush (I couldn't find it)?
I'm aware that this somewhat circumvents the whole custom field and parameter assignment part, but maybe you can fit it in smoothly by using a Bash-like named-pipe syntax to turn records and fields into virtual files using fifos. For instance
could provide the second field of records.txt as a file. The syntax should, of course, not clash with common shell syntax. This example was just for illustration purposes.
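The syntax example itself was stripped from the thread, but the mechanism being described resembles bash process substitution, which already turns a field stream into a virtual file backed by a fifo. A hypothetical sketch (plain bash, not proposed rush syntax):

```shell
# Expose field 2 of a tab-separated records file to a command as a file.
printf 'a\t1\nb\t2\nc\t3\n' > records.txt
# The command is handed a filename (/dev/fd/NN) containing only field 2;
# prints 1, 2, 3 on separate lines.
cat <(cut -f2 records.txt)
```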
Best,
Johannes