elPrep: a high-performance tool for analyzing sequence alignment/map files in sequencing pipelines.

Overview

elPrep is a high-performance tool for analyzing .sam/.bam files (up to and including variant calling) in sequencing pipelines. The key advantage of elPrep is that it performs only a single pass through a .sam/.bam file, independent of the number of processing steps applied in a particular pipeline, which greatly improves runtime performance.

elPrep is designed as an in-memory, multi-threaded application to fully take advantage of the processing power available with modern servers. Its software architecture is based on functional programming techniques, which allows easy composition of multiple alignment filters and optimizations such as loop merging. To make this possible, elPrep introduces several new algorithms. For example, for duplicate marking we devised an algorithm that expresses the operation as a single-pass filter, using memoization techniques and hierarchical hash tables for efficient parallel synchronisation. For base quality score recalibration (BQSR) we designed a parallel range-reduce algorithm. For variant calling, we designed a parallel algorithm that overlaps its execution as much as possible with the other phases of the pipeline.

Our benchmarks show that elPrep executes a 5-step variant calling best practices pipeline (sorting, duplicate marking, base quality score recalibration and application, and variant calling) between 6 and 10 times faster than other tools for whole-exome data, and 8 to 20 times faster for whole-genome data.

The main advantage of elPrep is its very fast execution on high-end servers, such as those available through cloud computing or custom server setups. We do not recommend using elPrep on laptops, desktops, or low-end servers. Please consult the system requirements below for more details.

elPrep is being developed at the ExaScience Life Lab at Imec. For questions, use our mailing list (below), our github page, or contact us via [email protected].

Fig. 1 Improvements with elPrep 5 in runtime, RAM, and disk use for a variant calling best practices pipeline on a 50x Platinum NA12878 WGS sample aligned against hg38. elPrep combines the execution of the 5 pipeline steps for efficient parallel execution.

NA12878 Platinum Genome run

For more benchmark details, please consult our publication list.

Advantages

The advantages of elPrep include:

  • efficient multi-threaded execution
  • operates mainly in memory; few intermediate files are generated
  • 100% equivalent output to results produced by other widely used tools
  • compatible with existing tools
  • modular, easy to add and remove pipeline steps

Availability

elPrep is released and distributed under a dual-licensing scheme. It is released as an open-source project under the terms of the GNU Affero General Public License version 3 as published by the Free Software Foundation, with Additional Terms. Please see the file LICENSE.txt for a copy of the GNU Affero General Public License and the Additional Terms. For inquiries about the premium licensing option, contact us via [email protected].

We also provide a download of a precompiled binary.

Binaries

elPrep 5 binaries can be downloaded from this website.

GitHub

The elPrep source code is freely available on GitHub. elPrep is implemented in Go and tested for Linux.

Dependencies

elPrep works with the .sam, .bam, and .vcf formats as input/output. There was previously a dependency on samtools for reading and writing .bam files, but since elPrep 4.0, .bam files are supported directly, so samtools no longer needs to be present. If you need support for .cram files, consider converting them to/from .bam files before/after running elPrep, using samtools or other alternatives. There was also previously a dependency on bcftools for reading and writing .vcf.gz and .bcf files, but since elPrep 5.0, .vcf.gz files are supported directly as well. If you need support for .bcf files, consider converting them to/from .vcf.gz files before/after running elPrep, using bcftools or other alternatives.

elPrep relies on its own .elsites file format for representing known sites during base quality score recalibration. Such .elsites files can be generated from .vcf files using the elPrep vcf-to-elsites command, and from .bed files using bed-to-elsites. elPrep also uses its own .elfasta file format for representing references during base quality score recalibration and variant calling. .elfasta files can be generated from .fasta files using the elPrep fasta-to-elfasta command.
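
For example (a minimal sketch; the file names are placeholders):

elprep vcf-to-elsites dbsnp_138.hg38.vcf dbsnp_138.hg38.elsites
elprep bed-to-elsites targets.bed targets.elsites
elprep fasta-to-elfasta hg38.fasta hg38.elfasta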

There are no dependencies on other tools.

Building

The following information is only relevant if you wish to build elPrep yourself. This is not necessary if you use the precompiled elPrep binary we provide.

elPrep (since version 3.0) is implemented in Go. Please make sure you have a working installation of Go. You can install Go from the Go website, or via a package manager: most package managers provide options to install Go as a development tool. Check the documentation of the package manager of your Linux distribution for details.

First check out the elPrep sources using the following command:

	go get -u github.com/exascience/elprep

This downloads the elPrep Go source code and creates the elprep binary in your configured Go home folder, for example ~/go/bin/elprep. See the GOPATH environment variable for the location of your Go home folder.

Add the binary to your path, for example:

      export PATH=$PATH:~/go/bin

Compatibility

elPrep has been developed for Linux and has not been tested for other operating systems. We have tested elPrep with the following Linux distributions:

  • Ubuntu 14.04.5 LTS
  • Manjaro Linux
  • Red Hat Enterprise Linux 6.4 and 6.5
  • Amazon Linux 2

Memory Requirements

RAM

elPrep is designed to operate in memory, i.e. data is stored in RAM during computation. As long as you do not use the in-memory sort, mark duplicates filter, base recalibration, or variant calling, elPrep operates as a streaming tool, and peak memory use is limited to a few GB.

elPrep also provides a tool for splitting .sam files per chromosome (or, more precisely, per group of chromosomes) and guarantees that processing these split files and then merging the results does not lose information compared to processing the .sam file as a whole. Using the split/merge tool greatly reduces the RAM required to process a .sam file, but it comes at the cost of an additional processing step.

We recommend the following minimum amounts of RAM when executing memory-intensive operations such as sorting, duplicate marking, base quality score recalibration, and variant calling:

  • whole-genome 30x: 128 GB RAM using the elprep split/filter/merge mode (sfm)
  • whole-genome 50x: 200 GB RAM using the elprep split/filter/merge mode (sfm)
  • whole-exome 30x: 24 GB RAM using the elprep split/filter/merge mode (sfm)
  • whole-exome 30x: 92 GB RAM using the elprep in-memory mode (filter)

These numbers are only estimates; actual RAM use may differ for your data sets.

Disk Space

elPrep by default does not write any intermediate files, and therefore does not require additional (peak) disk space beyond what is needed for storing the input and output files. If you use the elPrep split and merge tools, elPrep requires additional disk space equal to the size of the input file.

Mailing List and Contact

Use the Google forum for discussions. You need a Google account to subscribe through the forum URL. You can also subscribe without a Google account by sending an email to [email protected].

You can also contact us via [email protected] directly.

For inquiries about commercial licensing options contact us via [email protected].

Citing elPrep

Please cite the following articles:

Herzeel C, Costanza P, Decap D, Fostier J, Verachtert W (2019) elPrep 4: A multithreaded framework for sequence analysis. PLoS ONE 14(2): e0209523. https://doi.org/10.1371/journal.pone.0209523

Herzeel C, Costanza P, Decap D, Fostier J, Reumers J (2015) elPrep: High-Performance Preparation of Sequence Alignment/Map Files for Variant Calling. PLoS ONE 10(7): e0132868. https://doi.org/10.1371/journal.pone.0132868

Costanza P, Herzeel C, Verachtert W (2019) A comparison of three programming languages for a full-fledged next-generation sequencing tool. BMC Bioinformatics 20:301. https://doi.org/10.1186/s12859-019-2903-5

If performance is below your expectations, please contact us first before reporting your results.

Examples

Variant calling pipeline for whole-genome data (WGS)

The following elprep command shows a 5-step variant calling best practices pipeline on WGS data:

elprep sfm NA12878.input.bam NA12878.output.bam
           --mark-duplicates --mark-optical-duplicates NA12878.output.metrics
           --sorting-order coordinate
           --bqsr NA12878.output.recal --known-sites dbsnp_138.hg38.elsites,Mills_and_1000G_gold_standard.indels.hg38.elsites
           --reference hg38.elfasta
           --haplotypecaller NA12878.output.vcf.gz

The command executes a pipeline that consists of 5 steps: sorting, PCR and optical duplicate marking, base quality score recalibration and application, and variant calling.

We can break up the command as follows:

  • The sfm subcommand tells elprep to run in sfm (split/filter/merge) mode. This is generally the preferred mode for WGS data, unless the data has very low coverage (<= 10x).

  • The input file is "NA12878.input.bam".

  • Output is written to a file "NA12878.output.bam" that contains the result of modifying the input bam file by performing duplicate marking, sorting, and base quality score recalibration and application.

  • The flags --mark-duplicates and --mark-optical-duplicates instruct elprep to perform PCR and optical duplicate marking respectively. The statistics generated by this are written to a file "NA12878.output.metrics".

  • The flag --sorting-order tells elprep to sort the input bam file by coordinate order.

  • The flag --bqsr instructs elprep to perform base quality score recalibration. The statistics generated by this are written to a file "NA12878.output.recal". The --bqsr option also needs to know the reference fasta file with which the input bam was created, cf. the "--reference hg38.elfasta" option. Note the file extension ".elfasta": elPrep requires converting the fasta file to this format before running the pipeline, via the command "elprep fasta-to-elfasta hg38.fasta hg38.elfasta". The --bqsr option also needs to know the known variant sites, passed via the "--known-sites dbsnp_138.hg38.elsites" option. Note the file extension ".elsites": elPrep requires converting vcf files to this format before running the pipeline, via the command "elprep vcf-to-elsites dbsnp_138.hg38.vcf dbsnp_138.hg38.elsites".

  • The flag --haplotypecaller instructs elprep to perform variant calling. It uses the same reference fasta as the one passed for --bqsr (via --reference). The result of this step is written to a file "NA12878.output.vcf.gz".

For details, consult the manual reference pages.

Variant calling pipeline for whole-exome data (WES)

The following elprep command shows a 5-step variant calling best practices pipeline on WES data:

elprep sfm NA12878.input.bam  NA12878.output.bam 
           --mark-duplicates --mark-optical-duplicates NA12878.output.metrics 
           --sorting-order coordinate 
           --bqsr NA12878.output.recal --known-sites dbsnp_137.hg19.elsites,Mills_and_1000G_gold_standard.indels.hg19.elsites 
           --reference hg19.elfasta 
           --haplotypecaller NA12878.output.vcf.gz 
           --target-regions nexterarapidcapture_expandedexome_targetedregions.bed 

elPrep uses an internal ".elfasta" format for representing fasta files, which can be created using the "elprep fasta-to-elfasta" command before running the pipeline. Similarly, elPrep uses an internal format for representing vcf files containing known variant sites (.elsites), which can be created using the command "elprep vcf-to-elsites".

For details, consult the manual reference pages.

Manual Reference Pages

Name

elprep filter - a commandline tool for filtering and updating .sam/.bam files and variant calling

Synopsis

elprep filter input.sam output.sam --mark-duplicates --mark-optical-duplicates output.metrics
                                   --sorting-order coordinate 
                                   --bqsr output.recal --reference hg38.elfasta --known-sites dbsnp_138.hg38.elsites 
                                   --haplotypecaller output.vcf.gz

elprep filter input.bam output.bam --mark-duplicates --mark-optical-duplicates output.metrics 
                                   --sorting-order coordinate 
                                   --bqsr output.recal --reference hg38.elfasta --known-sites dbsnp_138.hg38.elsites 
                                   --haplotypecaller output.vcf.gz

elprep filter /dev/stdin /dev/stdout --mark-duplicates --mark-optical-duplicates output.metrics 
                                     --sorting-order coordinate 
                                     --bqsr output.recal --reference hg38.elfasta --known-sites dbsnp_138.hg38.elsites	
                                     --haplotypecaller output.vcf.gz

Description

The elprep filter command requires two arguments: the input file and the output file. The input/output format can be .sam or .bam. elPrep determines the format of the input by analyzing the actual contents of the input file. The format of the output file is determined by looking at the file extension. elPrep also allows using /dev/stdin and /dev/stdout as input or output, respectively, for use with Unix pipes. When doing so, elPrep assumes the output is in .sam format, which can be changed by additional parameters (see below).

The elprep filter command-line tool has three types of command options: filters, which implement actual .sam/.bam manipulations, sorting options, and execution-related options, for example for setting the number of threads. For optimal performance, issue a single elprep filter call that combines all filters you wish to apply.

The order in which command options are passed is ignored. For optimal performance, elPrep always applies filters/operations in the following order:

  1. filter-unmapped-reads or filter-unmapped-reads-strict
  2. filter-mapping-quality
  3. filter-non-exact-mapping-reads or filter-non-exact-mapping-reads-strict
  4. filter-non-overlapping-reads
  5. clean-sam
  6. replace-reference-sequences
  7. replace-read-group
  8. mark-duplicates
  9. mark-optical-duplicates
  10. bqsr
  11. remove-duplicates
  12. remove-optional-fields
  13. keep-optional-fields
  14. haplotypecaller

Sorting is done after filtering.
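
For example, the following single call (a minimal sketch; the file names and quality threshold are placeholders) combines several filters in one pass; elPrep applies them in the order listed above, regardless of the order on the command line:

elprep filter input.bam output.bam --filter-unmapped-reads --filter-mapping-quality 30 --mark-duplicates --remove-duplicates --sorting-order coordinate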

Please also see the elprep sfm command.

Unix pipes

elPrep is compatible with Unix pipes and allows using /dev/stdin and /dev/stdout as input or output sources. elPrep analyzes the input from /dev/stdin to determine if it is in .sam or .bam format, and assumes that output to /dev/stdout is in .sam format, unless specified otherwise (see below).
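
For example, a .cram file could be converted on the fly with samtools and piped into elPrep (a sketch; it assumes samtools is installed and can decode the CRAM file, and uses placeholder file names):

samtools view -h input.cram | elprep filter /dev/stdin output.bam --mark-duplicates --sorting-order coordinate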

Filter Command Options

--replace-reference-sequences file

This filter is used for replacing the header of a .sam/.bam file by a new header. The new header is passed as a single argument following the command option. The format of the new header can either be a .dict file or another .sam/.bam file from which you wish to extract the new header.

All alignments in the input file that do not map to a chromosome that is present in the new header are removed. Therefore, there should be some overlap between the old and the new header for this command option to be meaningful. The option is typically used to reorder the reference sequence dictionary in the header.

Replacing the header of a .sam/.bam file may destroy the sorting order of the file. In this case, the sorting order in the header is set to "unknown" by elPrep in the output file (cf. the SO tag).
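
For example (a sketch; the .dict and .bam file names are placeholders):

elprep filter input.bam output.bam --replace-reference-sequences hg38.dict --sorting-order coordinate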

--filter-unmapped-reads

Removes all alignments in the input file that are unmapped. An alignment is determined unmapped when bit 0x4 of its FLAG is set, conforming to the SAM specification.

--filter-unmapped-reads-strict

Removes all alignments in the input file that are unmapped. An alignment is determined unmapped when bit 0x4 of its FLAG is set, conforming to the SAM specification. Also removes alignments where the mapping position (POS) is 0 or where the reference sequence name (RNAME) is *. Such alignments are considered unmapped by the SAM specification, but some alignment programs may not mark the FLAG of those alignments as unmapped.

--filter-mapping-quality mapping-quality

Removes all alignments with a mapping quality lower than the given mapping quality.

--filter-non-exact-mapping-reads

Removes all alignments where the mapping is not an exact match with the reference, although soft-clipping is allowed. This filter checks the CIGAR string and only allows occurrences of M and S.

--filter-non-exact-mapping-reads-strict

Removes all alignments where the mapping is not an exact match with the reference or not a unique match. This filter checks for each read that the following optional fields are present with the following values: X0=1 (unique mapping), X1=0 (no suboptimal hit), XM=0 (no mismatch), XO=0 (no gap opening), XG=0 (no gap extension).

--filter-non-overlapping-reads bed-file

Removes all reads where the mapping positions do not overlap with any region specified in the bed file. Specifically, either the start or end of the read's mapping position must be contained in an interval, or the read is removed from the output.

This option produces a different result from the --target-regions option. For the difference between both options and details on the algorithms, please consult our latest publication.
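
For example (a sketch; the file names are placeholders):

elprep filter input.bam output.bam --filter-non-overlapping-reads targets.bed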

--replace-read-group read-group-string

This filter replaces or adds read groups to the alignments in the input file. This command option takes a single argument, a string of the form "ID:group1 LB:lib1 PL:illumina PU:unit1 SM:sample1" where the names following ID:, PL:, PU:, etc. can be any user-chosen name conforming to the SAM specification. See SAM Format Specification Section 1.3 for details. The string passed here can be any string conforming to a header line for tag @RG, omitting the tag @RG itself, and using whitespace as separators for the line instead of TABs.
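
For example, using the read group string from above (the file names and read group values are placeholders):

elprep filter input.bam output.bam --replace-read-group "ID:group1 LB:lib1 PL:illumina PU:unit1 SM:sample1"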

--mark-duplicates

This filter marks the duplicate reads in the input file by setting bit 0x400 of their FLAG conforming to the SAM specification. For details on the algorithm and comparison to other tools, please consult our publication list.

--mark-optical-duplicates file

When the --mark-duplicates filter is passed, one can also pass --mark-optical-duplicates. This option makes sure that optical duplicate marking is performed and a metrics file is generated that contains read statistics such as number of unmapped reads, secondary reads, duplicate reads, optical duplicates, library size estimate, etc. For details on the algorithm and comparison to other tools, please consult our publication list.

The metrics file generated by --mark-optical-duplicates is compatible with MultiQC for visualisation.

--optical-duplicates-pixel-distance nr

This option allows specifying the pixel distance that is used for optical duplicate marking. This option is only usable in conjunction with --mark-optical-duplicates. The default value for the pixel distance is 100. In general, a pixel distance of 100 is recommended for data generated using unpatterned flowcells (e.g. HiSeq2500) and a pixel distance of 2500 is recommended for patterned flowcells (e.g. NovaSeq/HiSeq4000).
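
For example, for data generated on a patterned flowcell (a sketch; the file names are placeholders):

elprep filter input.bam output.bam --mark-duplicates --mark-optical-duplicates output.metrics --optical-duplicates-pixel-distance 2500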

--remove-duplicates

This filter removes all reads marked as duplicates. Duplicate reads are reads where bit 0x400 of their FLAG is set, conforming to the SAM specification.

--remove-optional-fields [all | list]

This filter removes for each alignment either all optional fields or all optional fields specified in the given list. The list of optional fields to remove has to be of the form "tag1, tag2, ..." where tag1, tag2, etc. are the tags of the optional fields that need to be deleted.

--keep-optional-fields [none | list]

This filter removes for each alignment either none of its optional fields, or all optional fields except those specified in the given list. The list of optional fields to keep has to be of the form "tag1, tag2, ..." where tag1, tag2, etc. are the tags of the optional fields that need to be kept in the output.
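
For example (sketches with placeholder file names; OQ, XA, RG, and MD are ordinary SAM optional-field tags chosen for illustration):

elprep filter input.bam output.bam --remove-optional-fields "OQ, XA"
elprep filter input.bam output.bam --keep-optional-fields "RG, MD"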

--clean-sam

This filter fixes alignments in two ways:

  • it soft clips alignments that hang off the end of their reference sequence
  • it sets the mapping quality to 0 if the alignment is unmapped

This filter is similar to the CleanSam command of Picard.

--bqsr recal-file

This filter performs base quality score recalibration. The recal-file is used for logging the recalibration tables computed during base recalibration. This file is compatible with MultiQC for visualisation.

There are additional elprep options that can be used for configuring the base quality score recalibration:

  • --reference elfasta (required)
  • --known-sites list (optional)
  • --quantize-levels nr (optional)
  • --sqq list (optional)
  • --max-cycle nr (optional)
  • --target-regions bed-file (optional)

See detailed descriptions of these options next.

--reference elfasta-file

This option is used to pass a reference file for base quality score recalibration (--bqsr). The reference file must be in the .elfasta format, specific to elPrep.

You can create an .elfasta file from a .fasta file using the elprep command fasta-to-elfasta. For example:

elprep fasta-to-elfasta ucsc.hg19.fasta ucsc.hg19.elfasta

You only need to pass this option once if you are using both the --bqsr and --haplotypecaller options (which both require passing a reference file).

--known-sites list

This option is used to pass a number of known polymorphic sites that will be excluded during base recalibration (--bqsr). The argument is a comma-separated list of files in the .elsites format, specific to elPrep. For example:

--known-sites Mills_and_1000G_gold_standard.indels.hg19.elsites,dbsnp_137.hg19.elsites

You can create .elsites files from .vcf or .bed files using the vcf-to-elsites and bed-to-elsites commands respectively. For example:

elprep vcf-to-elsites dbsnp_137.hg19.vcf dbsnp_137.hg19.elsites

--quantize-levels nr

This option is used to specify the number of levels for quantizing quality scores during base quality score recalibration (--bqsr). The default value is 0.

--sqq list

This option enables the use of static quantized quality scores with a given number of levels during base quality score recalibration (--bqsr). The list should be of the form "[nr, nr, nr]". The default value is [].

--max-cycle nr

This option is used to specify the maximum cycle value during base quality score recalibration (--bqsr). The default value is 500.

--target-regions bed-file

This option can be used to restrict which reads the base recalibration operates on by passing a .bed file that lists which genomic regions to consider. When this option is used, reads that fall outside the specified regions are removed from the output .bam file. This option is typically used when processing exomes.

This option produces a different result from the --filter-non-overlapping-reads option. For the difference between both options and details on the algorithms, please consult our latest publication.

--bqsr-tablename-prefix prefix

This option can be used to determine the prefix for the table names when logging the recalibration tables. The default value ensures that the output is compatible with MultiQC. It is normally not necessary to set this option.

--mark-optical-duplicates-intermediate file

This option is used in the context of filtering files created using the elprep split command. It is used internally by the elprep sfm command, but can be used when writing your own split/filter/merge scripts.

This option tells elPrep to perform optical duplicate marking and to write the result to an intermediate metrics file. The intermediate metrics file generated this way can later be merged with other intermediate metrics files, see the merge-optical-duplicates-metrics command.

--bqsr-tables-only table-file

This option is used in the context of filtering files created using the elprep split command. It is used internally by the elprep sfm command, but can be used when writing your own split/filter/merge scripts.

This option tells elPrep to perform base quality score recalibration and to write the result of the recalibration to an intermediate table file. This table file will need to be merged with other intermediate recalibration results during the application of the base quality score recalibration. See the --bqsr-apply option.

--bqsr-apply path

This option is used when filtering files created by the elprep split command. It is used internally by the elprep sfm command, and can be used when writing your own split/filter/merge scripts.

This option is used for applying base quality score recalibration on an input file. It expects a path parameter that refers to a directory that contains intermediate recalibration results for multiple files created using the --bqsr-tables-only option.

--haplotypecaller vcf-file

This option performs variant calling for detecting germline SNPs and indels. The vcf-file is used for storing the vcf output. This file can be in gzipped format.

There are additional elprep options that can be used for configuring the haplotype variant caller:

  • --reference elfasta (required)
  • --reference-confidence [GVCF | BP_RESOLUTION | NONE] (optional)
  • --sample-name name (optional)
  • --activity-profile igv-file (optional)
  • --assembly-regions igv-file (optional)
  • --assembly-region-padding nr (optional)
  • --target-regions bed-file (optional)

See detailed descriptions of these options next.

--reference elfasta

This option is used to pass a reference file for variant calling (--haplotypecaller). The reference file must be in the .elfasta format, specific to elPrep.

You can create an .elfasta file from a .fasta file using the elprep command fasta-to-elfasta. For example:

elprep fasta-to-elfasta ucsc.hg19.fasta ucsc.hg19.elfasta

You only need to pass this option once if you are using both the --bqsr and --haplotypecaller options (which both require passing a reference file).

--reference-confidence [GVCF | BP_RESOLUTION | NONE]

This option is used to set the mode for emitting reference confidence scores when performing variant calling (--haplotypecaller). There are three options to choose from:

  • GVCF (default): emit the GVCF format, i.e. the reference model is written with condensed non-variant blocks
  • BP_RESOLUTION: the reference model is emitted site by site
  • NONE: reference confidence calls are not emitted

--sample-name name

The elPrep haplotypecaller (--haplotypecaller) only works for single samples. In case the input .bam file contains multiple samples, the --sample-name option can be used to select the sample on which to operate.

--activity-profile igv-file

Use this option to output the activity profile calculated by the haplotypecaller to the given file in IGV format.

--assembly-regions igv-file

This option can be used to output the assembly regions calculated by the haplotypecaller to the specified file in IGV format.

--assembly-region-padding nr

This option specifies the number of additional bases to include around each assembly region for variant calling.

--target-regions bed-file

By default, the haplotypecaller scans the full genome for variants. Use this option to restrict the variant caller to specific regions by passing a .bed file. It is for example used when processing exomes.

You only need to pass this option once if you are using both the --bqsr and --haplotypecaller options.

Sorting Command Options

--sorting-order [keep | unknown | unsorted | queryname | coordinate]

This command option determines the order of the alignments in the output file. The command option must be followed by one of five possible orders:

  1. keep: The original order of the input file is preserved in the output file. This is the default setting when the --sorting-order option is not passed. Some filters may change the order of the input, in which case elPrep forces a sort to recover the order of the input file.
  2. unknown: The order of the alignments in the output file is undetermined, elPrep performs no sorting of any form. The order in the header of the output file will be unknown.
  3. unsorted: The alignments in the output file are unsorted, elPrep performs no sorting of any form. The order in the header of the output file will be unsorted.
  4. queryname: The output file is sorted according to the query name. The sort is enforced and guaranteed to be executed. If the original input file is already sorted by query name and you wish to avoid a sort with elPrep, use the keep option instead.
  5. coordinate: The output file is sorted according to coordinate order. The sort is enforced and guaranteed to be executed. If the original input file is already sorted by coordinate order and you wish to avoid a sort with elPrep, use the keep option instead.

Execution Command Options

--nr-of-threads number

This command option sets the number of threads that elPrep uses during execution. The default number of threads is equal to the number of CPU threads.

It is normally not necessary to set this option. elPrep by default allocates the optimal number of threads.

--timed

This command option is used to time the different phases of the execution of the elprep command, e.g. time spent on reading from file into memory, filtering, sorting, etc.

It is normally not necessary to set this option. It is only useful to get some details on where execution time is spent.

--log-path path

This command option is used to specify a path where elPrep can store log files. The default path is the logs folder in your home path (~/logs).

Format conversion tools

elPrep uses internal formats for representing .vcf, .bed, or .fasta files used by specific filter/sfm options. elPrep provides commands for creating these files from existing .vcf, .bed or .fasta files.

Name

elprep vcf-to-elsites - a commandline tool for converting a .vcf file to an .elsites file

Synopsis

elprep vcf-to-elsites input.vcf output.elsites --log-path /home/user/logs

Description

Converts a .vcf file to an .elsites file. Such a file can be passed to the --known-sites suboption of the --bqsr option.

Options

--log-path path

Sets the path for writing a log file.

Name

elprep bed-to-elsites - a commandline tool for converting a .bed file to an .elsites file

Synopsis

elprep bed-to-elsites input.bed output.elsites --log-path /home/user/logs

Description

Converts a .bed file to an .elsites file. Such a file can be passed to the --known-sites suboption of the --bqsr option.

Options

--log-path path

Sets the path for writing a log file.

Name

elprep fasta-to-elfasta - a commandline tool for converting a .fasta file to an .elfasta file

Synopsis

elprep fasta-to-elfasta input.fasta output.elfasta --log-path /home/user/logs

Description

Converts a .fasta file to an .elfasta file. The --reference suboption of the --bqsr and --haplotypecaller options requires an .elfasta file.

Options

--log-path path

Sets the path for writing a log file.

Split and Merge tools

The elprep split command can be used to split up .sam files into smaller files that store the reads "per chromosome," or more precisely per group of chromosomes. elPrep determines the chromosomes by analyzing the sequence dictionary in the header of the input file and generates a split file per group of chromosomes, where the groups are roughly equal in size; each split file stores all read pairs that map to that group of chromosomes. elPrep additionally creates a file for storing the unmapped reads and, in the case of paired-end data, also a file for storing the pairs whose reads map to different chromosomes. elPrep also duplicates the latter pairs across chromosome files so that preparation pipelines have access to all information they need to run correctly. Once processed, use the elprep merge command to merge the split files back into a single .sam file.

Splitting the .sam file into smaller files for processing "per chromosome" is useful for reducing the memory pressure as these split files are typically significantly smaller than the input file as a whole. Splitting also makes it possible to parallelize the processing of a single .sam file by distributing the different split files across different processing nodes.

We provide an sfm command that executes a pipeline while silently using the elprep filter and split/merge tools. It is of course possible to write scripts to combine the filter and split/merge tools yourself. We provide a recipe for writing your own split/filter/merge scripts on our github wiki.

Name

elprep sfm - a commandline tool for filtering and updating .sam/.bam files and variant calling "per chromosome"

Synopsis

elprep sfm input.sam output.sam 
           --mark-duplicates --mark-optical-duplicates output.metrics 
           --sorting-order coordinate 
           --bqsr output.recal --reference hg38.elfasta --known-sites dbsnp_138.hg38.elsites 
           --haplotypecaller output.vcf.gz

elprep sfm input.bam output.bam 
           --mark-duplicates --mark-optical-duplicates output.metrics 
           --sorting-order coordinate 
           --bqsr output.recal --reference hg38.elfasta --known-sites dbsnp_138.hg38.elsites 
           --haplotypecaller output.vcf.gz
           
Description

The elprep sfm command is a drop-in replacement for the elprep filter command that minimises the use of RAM. For this, it silently calls the elprep split and merge tools to split up the data "per chromosome" for processing, which requires less RAM than processing a .sam/.bam file as a whole (see Split and Merge tools).

Options

The elprep sfm command has the same options as the elprep filter command, with the following additions.

--intermediate-files-output-type [sam | bam]

This command option sets the format of the split files. By default, elprep uses the same format as the input file for the split files. Changing the intermediate file output type may either improve runtime (.sam) or reduce peak disk usage (.bam).

--tmp-path path

This command option is used to specify a path where elPrep can store temporary files that are created (and deleted) by the split and merge commands that are silently called by the elprep sfm command. The default path is the folder from where you call elprep sfm.

--single-end

Use this command option to indicate that the sfm command is processing single-end data. This information is important for the split/merge tools to operate correctly. For more details, see the description of the elprep split and elprep merge commands.

--contig-group-size number

This command option is passed to the split tool.

The elprep split command groups the sequence dictionary entries for deciding how to split up the input data. The goal is to end up with groups of sequence dictionary entries (contigs) for which the total length (sum of LN tags) is roughly the same among all groups. By default, the elprep split command identifies the sequence dictionary entry with the largest length (LN) and chooses this as a target size for the groups.

The --contig-group-size option allows configuring a specific group size. This size may be smaller than some of the sequence dictionary entries: elprep split will attempt to create as many groups of contigs of the chosen size as possible, and contigs that are "too long" are put in their own group.

Configuring the contig group size has an impact on how large the split files are that are generated by the elprep split command. Consequently, this also impacts how much RAM elprep uses for processing the split files. The default group size determines the minimum amount of RAM that is necessary to process a .sam/.bam file without information loss.

The default value for the --contig-group-size option is 0, in which case elprep split makes groups based on the sequence dictionary entry with the largest length (LN), as described above.

Choosing the value 1 for the --contig-group-size tells elprep split to split the data "per chromosome", i.e. a split file is created for each entry in the sequence dictionary.
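
For example, to force a split file per sequence dictionary entry (a sketch; the file names are placeholders):

elprep sfm input.bam output.bam --mark-duplicates --sorting-order coordinate --contig-group-size 1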

Name

elprep split - a commandline tool for splitting .sam/.bam files per chromosome so they can be processed without information loss

Synopsis

elprep split [sam-file | /path/to/input/] /path/to/output/ --output-prefix "split-sam-file" --output-type sam 
--nr-of-threads $threads --single-end

Description

The elprep split command requires two arguments: 1) the input file or a path to multiple input files and 2) a path to a directory where elPrep can store the split files. The input file(s) can be .sam or .bam. It is also possible to use /dev/stdin as the input for using Unix pipes. There are no structural requirements on the input file(s) for using elprep split. For example, it is not necessary to sort the input file, nor is it necessary to convert to .bam or index the input file.

Warning: If you pass a path to multiple input files to the elprep split command, elprep assumes that they all have the same (or compatible) headers, and simply picks the first header it finds as the header for all input files. elprep currently does not attempt to resolve potential conflicts between headers, especially with regard to the @SQ, @RG, or @PG header records. We will include proper merging of different SAM/BAM files in a future version of elprep. In the meantime, if you need proper merging of SAM/BAM files, please use samtools merge, Picard MergeSamFiles, or a similar tool. (If such a tool produces a SAM file as output, it can be piped into elprep using Unix pipes.)

elPrep creates the output directory denoted by the output path, unless the directory already exists, in which case elPrep may overwrite the existing files in that directory. Please make sure elPrep has the correct permissions for writing to that directory.

By default, the elprep split command assumes it is processing paired-end data. The flag --single-end can be used for processing single-end data. The output will look different for paired-end and single-end data.

Paired-end data (default)

The split command outputs two types of files:

  1. a subdirectory "/path/to/output/splits/". The split command groups the entries in the sequence dictionary of the input file and creates a file for each of these groups containing all reads that map to that group.
  2. a "/path/to/output/output-prefix-spread.output-type" file containing all reads of which the mate maps to a different entry in the sequence dictionary of the input file.

To process the files created by the elprep split command, one needs to call the elprep filter command for each entry in the path/to/output/splits/ directory as well as the /path/to/output/output-prefix-spread.output-type file. The output files produced this way need to be merged with the elprep merge command. This is implemented by the elprep sfm command.
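
The following bash sketch illustrates this flow for paired-end data; the paths, prefix, and chosen filter options are placeholders, and the elprep sfm command remains the canonical, optimised implementation of this process:

elprep split input.bam /tmp/split/ --output-prefix sample --output-type bam
mkdir -p /tmp/filtered
for f in /tmp/split/splits/*.bam /tmp/split/sample-spread.bam; do
    elprep filter "$f" /tmp/filtered/"$(basename "$f")" --mark-duplicates --sorting-order coordinate
done
elprep merge /tmp/filtered/ output.bam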

Single-end data (--single-end)

The split command groups entries in the sequence dictionary of the input file and creates a file for each of these groups that contain all reads that map to that group, and writes those files to the /path/to/output/ directory.

To process the files created by the elprep split --single-end command, one needs to call the elprep filter command for each entry in the /path/to/output/ directory. The output files produced this way need to be merged with the elprep merge command. This is implemented by the elprep sfm command.

Options

--output-prefix name

The split command groups entries in the sequence dictionary. The purpose of this grouping is to create groups of which the lengths of the entries (LN tags) add up to roughly the same size.

The names of the split files created by elprep split are generated by combining a prefix and a chromosome group name. The --output-prefix option sets that prefix.

For example, if the prefix is "NA12878", and the split command creates N groups for the sequence dictionary of the input file, then the names of the split files will be "NA12878-group1.output-type", "NA12878-group2.output-type", "NA12878-group3.output-type", and so on. A separate file for the unmapped reads is created, e.g. "NA12878-unmapped.output-type".

If the user does not specify the --output-prefix option, the name of the input file, minus the file extension, is used as a prefix.

--output-type [sam | bam]

This command option sets the format of the split files. By default, elprep uses the same format as the input file for the split files.

--nr-of-threads number

This command option sets the number of threads that elPrep uses during execution for parsing/outputting .sam/.bam data. The default number of threads is equal to the number of CPU threads.

It is normally not necessary to set this option. elPrep by default allocates the optimal number of threads.

--single-end

When this command option is set, the elprep split command will treat the data as single-end data. When the option is not used, the elprep split command will treat the data as paired-end data.

--log-path path

Sets the path for writing a log file.

--contig-group-size number

The elprep split command groups the sequence dictionary entries for deciding how to split up the input data. The --contig-group-size option allows configuring a specific group size. See the description of --contig-group-size for the elprep sfm command for more details.

Name

elprep merge - a commandline tool for merging .sam/.bam files created by elprep split

Synopsis

elprep merge /path/to/input/ sam-output-file --nr-of-threads $threads --single-end

Description

The elprep merge command requires two arguments: a path to the files that need to be merged, and an output file. Use this command to merge files created with elprep split. The output file can be .sam or .bam. It is also possible to use /dev/stdout as output when using Unix pipes for connecting other tools.

Options

--nr-of-threads number

This command option sets the number of threads that elPrep uses during execution for parsing/outputting .sam/.bam data. The default number of threads is equal to the number of CPU threads.

It is normally not necessary to set this option. elPrep by default allocates the optimal number of threads.

--single-end

This command option tells the elprep merge command to treat the data as single-end data. When this option is not used, elprep merge assumes the data is paired-end and expects that the data it is merging was generated accordingly by the elprep split command.

--log-path path

Sets the path for writing a log file.

Name

elprep merge-optical-duplicates-metrics - a commandline tool for merging intermediate metrics files created by the --mark-optical-duplicates-intermediate option

Synopsis

elprep merge-optical-duplicates-metrics input-file output-file metrics-file /path/to/intermediate/metrics --remove-duplicates

Description

The elprep merge-optical-duplicates-metrics command requires four arguments: the names of the original input and output .sam/.bam files for which the metrics are calculated, the metrics file to which the merged metrics should be written, and a path to the intermediate metrics files that need to be merged (and were generated using --mark-optical-duplicates-intermediate).

Options

--nr-of-threads number

This command option sets the number of threads that elPrep uses during execution for parsing/outputting .sam/.bam data. The default number of threads is equal to the number of CPU threads.

It is normally not necessary to set this option. elPrep by default allocates the optimal number of threads.

--remove-duplicates

Pass this option if the metrics were generated for a file for which the duplicates were removed. This information will be included in the merged metrics file.

Extending elPrep

If you wish to extend elPrep, for example by adding your own filters, please consult our API documentation.

Acknowledgements

Many thanks to the following people for testing, bug reports, and contributions:

Amin Ardeshirdavani

Pierre Bourbon

Benoit Charloteaux

Richard Corbett

Didier Croes

Matthias De Smet

Keith James

Leonor Palmeira

Joke Reumers

Geert Vandeweyer

Comments
  • error while trying to split

    Hi. I have a 100X human bam file (about 210G) that I want to mark duplicates inside.

    I have a server with about 800Gb of RAM, but judging by your description it would be unsafe to try marking duplicates without splitting first.

    When I try the splitting command like this: elprep-v2.10/elprep split bam.merged.bam splitFiles --output-prefix bamMerged --output-type sam

    I get the following error: "view: invalid option -- '@' open: No such file or directory [main_samview] fail to open file for reading. view: invalid option -- '@' [main_samview] fail to open file for reading. view: invalid option -- '@' [main_samview] fail to open file for reading. " I have a hunch that there is a problem with version of samtools, but its only a guess. Do you have any ideas?

  • Unclipped position not present in SAM alignment

    Hi, I'm getting failed tasks when using --mark-optical-duplicates, and a message 'Unclipped position not present in SAM alignment'. Could this be the cause, or should I look for some other reason that the task is failing?

    Thank you.

  • elprep sfm mode exits with out of Memory error

    Thanks for the great work with elPrep! It has been really useful in cutting down analysis runtimes!

    We have been running elPrep (4.1.5) on a WGS dataset to primarily use the mark duplicates and bqsr functionalities, with mixed success. A subset of samples work as expected while some are exiting with the following runtime out of memory error. Would greatly appreciate any inputs regarding this problem -

    fatal error: runtime: out of memory
    
    runtime stack:
    runtime.throw(0x5f0611, 0x16)
            /opt/local/lib/go/src/runtime/panic.go:617 +0x72
    runtime.sysMap(0x11748000000, 0x4000000, 0x78c078)
            /opt/local/lib/go/src/runtime/mem_linux.go:170 +0xc7
    runtime.(*mheap).sysAlloc(0x7746c0, 0x2000, 0x7746d0, 0x1)
            /opt/local/lib/go/src/runtime/malloc.go:633 +0x1cd
    runtime.(*mheap).grow(0x7746c0, 0x1, 0x0)
            /opt/local/lib/go/src/runtime/mheap.go:1222 +0x42
    runtime.(*mheap).allocSpanLocked(0x7746c0, 0x1, 0x78c088, 0x7f9d8df1a888)
            /opt/local/lib/go/src/runtime/mheap.go:1150 +0x37f
    runtime.(*mheap).alloc_m(0x7746c0, 0x1, 0x45002f, 0x7f9d8df1a888)
            /opt/local/lib/go/src/runtime/mheap.go:977 +0xc2
    runtime.(*mheap).alloc.func1()
            /opt/local/lib/go/src/runtime/mheap.go:1048 +0x4c
    runtime.systemstack(0x0)
            /opt/local/lib/go/src/runtime/asm_amd64.s:351 +0x66
    runtime.mstart()
            /opt/local/lib/go/src/runtime/proc.go:1153
    

    The targeted coverage for the dataset is 60X , the input BAM is roughly ~114GB. The command line taken from the logs -

    /home/ubuntu/elPrep/elprep sfm INPUTBAM OUTPUTBAM --mark-duplicates --mark-optical-duplicates OUTPUTDUPMETRICS --sorting-order keep --bqsr OUTPUTRECAL --bqsr-reference ucsc.hg19.elfasta --known-sites <knownSiteFiles>

  • sfm sorting problem

    elprep v 4.0.1 SFM functionality results in incorrectly sorted bam.

    command : /opt/NGS/binaries/elPrep/4.0.1/elprep sfm 'DNA1802266B_recalibrated.bam' /home/shared_data_medgen_frax/Exome_CNV_TestRuns/CNV_tmp_files/dna1802266b/dna1802266b.full.bam --remove-duplicates --filter-mapping-quality '40' --log-path '/home/shared_data_medgen_frax/Exome_CNV_TestRuns/CNV_Job_Output/dna1802266b/' --filter-unmapped-reads --nr-of-threads '16' --sorting-order 'coordinate' --filter-non-overlapping-reads '/opt/NGS/References/hg19/BedFiles/WGS.bed' --replace-reference-sequences '/opt/NGS/References/hg19_masked_par/samtools/0.1.19/hg19.dict'

    results in error when applying samtools index resulting.bam

    [bam_index_core] the alignment is not sorted (ST-K00127:290:HNFYKBBXX:1:1216:29904:8330): 249240352 > 752696 in 2-th chr [bam_index_build2] fail to index the BAM file.

    running elprep filter instead of elprep sfm results in correct sorting, so the problem seems to be related to the SFM merging. I also tried with --contig-group-size 1, instead of default, with similar errors as a result.

  • Type error for stream in parse-sam-header on sbcl 1.2.9 OSX

    I've got further from #2 by using the method I described. I've read the code and would expect this to work (via Gray streams), but nevertheless:

    elPrep version 2.11. See http://github.com/exascience/elprep for more information.
    Executing command:
      elprep /dev/stdin NA12878-chr22-10pct.only_mapped.bam --filter-unmapped-reads --sorting-order keep --gc-on 0 --nr-of-threads 1
    
    debugger invoked on a TYPE-ERROR in thread
    #<THREAD "main thread" RUNNING {10037AECC3}>:
      The value
        #S(ELPREP::BUFFERED-ASCII-INPUT-STREAM
           :INDEX 0
           :LIMIT 8192
           :BUFFER #(69 82 82 49 57 52 49 52 55 46 53 53 ...)
           :SECONDARY-BUFFER NIL
           :ELEMENT-TYPE BASE-CHAR
           :STREAM #<SB-SYS:FD-STREAM for "file /dev/fd/0" {10038C60C3}>)
      is not of type
        STREAM.
    
    Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
    
    (no restarts: If you didn't do this on purpose, please report it as a bug.)
    
    (ELPREP:PARSE-SAM-HEADER #<unavailable argument>) [tl,external]
    0] 
    
  • exit status 2

    Hi, I run elprep. However I couldn't get a vcf file.

    $ elprep sfm Sample.Aligned.sortedByCoord.out.bam Sample.output.bam --mark-duplicates --mark-optical-duplicates Sample.output.metrics --sorting-order coordinate --bqsr Sample.output.recal --known-sites dbsnp_138.hg19.elsites,Mills_and_1000G_gold_standard.indels.hg19.elsites --reference ucsc.hg19.elfasta --haplotypecaller Sample.output.vcf.gz
    
    elprep version 5.0.2 compiled with go1.16.4 - see http://github.com/exascience/elprep for more information.
    
    2021/06/30 19:13:41 Created log file at /home/nmc-ei/logs/elprep/elprep-2021-06-30-19-13-41-430755928-JST.log
    2021/06/30 19:13:41 Command line: [elprep sfm Sample.Aligned.sortedByCoord.out.bam Sample.output.bam --mark-duplicates --mark-optical-duplicates Sample.output.metrics --sorting-order coordinate --bqsr Sample.output.recal --known-sites dbsnp_138.hg19.elsites,Mills_and_1000G_gold_standard.indels.hg19.elsites --reference ucsc.hg19.elfasta --haplotypecaller Sample.output.vcf.gz]
    2021/06/30 19:13:41 Executing command:
     elprep sfm Sample.Aligned.sortedByCoord.out.bam Sample.output.bam --mark-duplicates --mark-optical-duplicates Sample.output.metrics --optical-duplicates-pixel-distance 100 --bqsr Sample.output.recal --reference ucsc.hg19.elfasta --quantize-levels 0 --max-cycle 500 --known-sites dbsnp_138.hg19.elsites,Mills_and_1000G_gold_standard.indels.hg19.elsites --haplotypecaller Sample.output.vcf.gz --sorting-order coordinate --intermediate-files-output-prefix Sample.Aligned.sortedByCoord.out --intermediate-files-output-type bam
    2021/06/30 19:13:41 Splitting...
    2021/06/30 19:15:39 Filtering (phase 1)...
    2021/06/30 19:16:46 exit status 2
    

    best and thanks for this utility,

    eisuke

  • Large amount of diskspace required

    We are running a few benchmarks on the MarkDuplicates features of ElPrep (v4.1.5) on WGS samples in isolation.

    We started with a 'small' WGS sample with a mean coverage of 41 (BAM: 60GB - Mean Cov. 41+/-15, Median 43). Your paper would suggest it required 1.6 times that of GATK/Picard. We found it needed nearly 4.5 times the space (~550 GB).

    A large WGS sample with a mean coverage of 70 (BAM: 101GB - Mean Cov. 70+/-24, Median 74) needed 4.7 times the disk space (~950GB).

    The scaling means that we would not be able to use ElPrep on larger WGS samples (120X). We replicated the call on two different compute clusters. Both calls had the same required disk space. Is there something we are missing?

    Call

    /usr/bin/time --verbose elprep sfm "$local_file_path" "$output_file" \
        --mark-duplicates \
        --mark-optical-duplicates "$metrics_file" \
        --optical-duplicates-pixel-distance 2500 \
        --sorting-order keep \
        --log-path $(pwd) \
        --nr-of-threads 12 \
        --tmp-path $TMPDIR \
        --timed
    

    Method

    A du --bytes $TMPDIR ran in the background with an interval of 5 minutes in order to determine the maximum used disk storage.

  • `elprep split` output directory

    Hi,

    It seems like split doesn't correctly parse the output directory given on the command line

    $ elprep split input test --nr-of-threads 2 --output-prefix test                                                                             
    2022/04/13 09:36:14 Given output path is not a path: test.
    

    or

    $ elprep split input ./test --nr-of-threads 2 --output-prefix test 
    2022/04/13 09:36:22 Given output path is not a path: ./test.
    

    or even

    $ elprep split input $PWD/test --nr-of-threads 2 --output-prefix test 
    2022/04/13 09:37:08 Given output path is not a path: /projects/exome/development/elprep/test.
    
    $ elprep split input /projects/exome/development/elprep/test --nr-of-threads 2 --output-prefix test 
    2022/04/13 09:37:08 Given output path is not a path: /projects/exome/development/elprep/test.
    

    What does work is

    $ elprep split input . --nr-of-threads 2 --output-prefix test 
    

    The test directory does not exist yet, but according to the docs it should be made by elprep

    Cheers Matthias

  • Support for multiple input files

    Hi,

    I was wondering if it would be possible to add support for multiple input files. This way elprep could be used to merge multiple split bam files into a single final bam. This approach is also used by GATK.

    Thanks M

  • panic: runtime error: index out of range

    I was trying to run elprep with most of the filters on:

    2019/01/31 09:59:09 Executing command:
     elprep filter /data/input/DX123_HFJ5KDSXX_L2.sam /data/output/DX123_HFJ5KDSXX_L2.bam --mark-duplicates --mark-optical-duplicates /data/output/DX123_HFJ5KDSXX_L2.opticaldups.txt --optical-duplicates-pixel-distance 100 --remove-duplicates --bqsr /data/output/DX123_HFJ5KDSXX_L2.bqsr.txt --bqsr-reference /data/reference/hg19/ucsc_hg19.fa.elprep --quantize-levels 0 --sorting-order coordinate --nr-of-threads 30 --log-path /data/output
    panic: runtime error: index out of range
    goroutine 166040 [running]:
    runtime/debug.Stack(0xdcc8256380, 0x0, 0xc00011e700)
    	/opt/local/lib/go/src/runtime/debug/stack.go:24 +0xa7
    github.com/exascience/pargo/internal.WrapPanic(0x5aa000, 0x741450, 0x741450, 0xe1cf34ab)
    	/Users/caherzee/go/pkg/mod/github.com/exascience/[email protected]/internal/internal.go:41 +0x45
    github.com/exascience/pargo/parallel.RangeReduce.func1.1.1(0xd559ba65a0, 0xd55ce65a50)
    	/Users/caherzee/go/pkg/mod/github.com/exascience/[email protected]/parallel/parallel.go:803 +0x43
    panic(0x5aa000, 0x741450)
    	/opt/local/lib/go/src/runtime/panic.go:513 +0x1b9
    github.com/exascience/elprep/v4/filters.computeSnpEvents(0xd0c194a3c0, 0x0, 0x0, 0x0, 0xd0c1933800, 0x1e, 0x100, 0x4, 0x1e, 0x100)
    	/Users/caherzee/Documents/Work/Code/elprep/filters/bqsr.go:326 +0x3b3
    github.com/exascience/elprep/v4/filters.(*BaseRecalibrator).Recalibrate.func2(0x457055b, 0x469da0b, 0xc00004a220, 0xc00004aca0)
    	/Users/caherzee/Documents/Work/Code/elprep/filters/bqsr.go:952 +0x65a
    github.com/exascience/pargo/parallel.RangeReduce.func1(0x457055b, 0x469da0b, 0x1, 0xd55ce65a50, 0x96)
    	/Users/caherzee/go/pkg/mod/github.com/exascience/[email protected]/parallel/parallel.go:789 +0x2ba
    github.com/exascience/pargo/parallel.RangeReduce.func1.1(0xd559ba65a0, 0xd55ce65a50, 0xd0c21c34e8, 0x457055b, 0x469da0b, 0x2, 0x1, 0xd559ba6590)
    	/Users/caherzee/go/pkg/mod/github.com/exascience/[email protected]/parallel/parallel.go:806 +0x83
    created by github.com/exascience/pargo/parallel.RangeReduce.func1
    	/Users/caherzee/go/pkg/mod/github.com/exascience/[email protected]/parallel/parallel.go:801 +0x1d2
    

    The .sam file was created by mapping raw reads to the reference using bwa. The reference was generated by elprep fasta-to-elfasta.

    >uname -a
    Linux hostname 2.6.32-696.28.1.el6.x86_64 #1 SMP Wed May 9 23:09:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
    

    Any ideas?

    UPDATE: It seemed there was something wrong with the SNP event computation, so I tried passing --known-sites an .elsites file compiled from dbSNP with elprep vcf-to-elsites, which didn't help.

  • Feature discussion: avoiding samtools calls

    Hi,

    To further improve performance, would it be a good choice to avoid shelling out to samtools by using a Go-native htslib implementation? A quick Google search brought https://github.com/biogo/hts to the surface (a small read example is sketched below).

    Do you suppose it's viable to integrate this into elprep?

    Cheers M
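
    For illustration, a minimal Go sketch of reading a .bam file with the biogo/hts library mentioned above. This only demonstrates the library's reader API as we understand it; it is not elPrep's code, and the file name and thread count are assumptions for the example.

    // readbam.go: print the read names of the first few records in a .bam file
    // using github.com/biogo/hts. Illustrative sketch only.
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "github.com/biogo/hts/bam"
    )

    func main() {
        f, err := os.Open("example.bam") // example input file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // The second argument is the number of decompression goroutines.
        r, err := bam.NewReader(f, 4)
        if err != nil {
            log.Fatal(err)
        }
        defer r.Close()

        // Print the names of the first five records.
        for i := 0; i < 5; i++ {
            rec, err := r.Read()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(rec.Name)
        }
    }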

  • elprep merge: add input for intermediate files

    elprep filter can generate a bunch of intermediate files from the outputs of elprep split in the sfm use case. However, merge has no option to actually take those intermediates as input for merging.

    Is there an option to add that?

    Thanks Matthias

  • header tag differs between runs

    Hi, this is a minor issue, but it seems the order of the tags in the header changes between two runs of elprep filter. This causes the md5sum to differ when comparing tests. Could you sort the tags so the order stays the same? (A possible approach is sketched after the diff below.)

    Thanks M

    matdsmet:tmpqdm3s0ro $ diff 61/b91ed7ecaa51be82e9859e6ca0bcc4/test.sam a9/8bbdf3f2b5b85947daab90527d4e47/test.sam
    1c1
    < @HD   SO:coordinate   VN:1.6
    ---
    > @HD   VN:1.6  SO:coordinate
    3c3
    < @RG   ID:1    PU:1    SM:testN        LB:testN     PL:illumina
    ---
    > @RG   LB:testN        PL:illumina     ID:1    PU:1 SM:testN
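
    A possible fix, sketched in Go: build each header line with its tags in a fixed, sorted order so repeated runs produce byte-identical headers. The function name and the alphabetical ordering are illustrative assumptions, not elPrep's actual header-writing code; a real implementation might pin required tags (such as VN in @HD) first and sort only the rest.

    // Emit SAM header tags in a deterministic order so that md5sums of the
    // output are reproducible across runs. Illustrative sketch only.
    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // formatHeaderLine renders a header line such as "@RG\tID:1\tLB:testN\t..."
    // with the tags sorted alphabetically, independent of map iteration order.
    func formatHeaderLine(recordType string, tags map[string]string) string {
        keys := make([]string, 0, len(tags))
        for k := range tags {
            keys = append(keys, k)
        }
        sort.Strings(keys) // fixed order instead of Go's randomized map order

        parts := []string{"@" + recordType}
        for _, k := range keys {
            parts = append(parts, k+":"+tags[k])
        }
        return strings.Join(parts, "\t")
    }

    func main() {
        fmt.Println(formatHeaderLine("HD", map[string]string{"VN": "1.6", "SO": "coordinate"}))
        fmt.Println(formatHeaderLine("RG", map[string]string{
            "ID": "1", "PU": "1", "SM": "testN", "LB": "testN", "PL": "illumina",
        }))
    }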
    
  • Add wiki page about "creating your own `sfm` script"

    Add wiki page about "creating your own `sfm` script"

    Hi,

    The README says there should be a wiki page about creating your own sfm scripts, but the wiki is empty. Can this be added? We're looking into creating our own sfm script for a more horizontal spread of the load.

    Thanks M

  • Proper running using sam input file?

    Hi,

    I read that elPrep can work with .sam file input, and since it does coordinate sorting, I just mapped my reads to the elfasta-converted reference and used that first .sam file as input. I am a little confused/concerned whether the final vcf will be correct, based on the job log. I used the following command:

    elprep sfm AL91.sam AL91.output.bam --filter-unmapped-reads --nr-of-threads 28 --tmp-path $TMPDIR \
      --mark-duplicates --mark-optical-duplicates AL91.metrics \
      --sorting-order coordinate \
      --bqsr AL91.recal \
      --reference /users/PHS0338/jpac1984/data/myse-hapog.elfasta \
      --haplotypecaller AL91.vcf.gz

    I thought that for proper variant calling it had to first convert/sort the .sam and then split. It has been ~16 hours and the only output is AL91.recal, with no AL91.metrics file yet.

    Here is the log:

    elprep version 5.1.1 compiled with go1.16.7 - see http://github.com/exascience/elprep for more information.

    2022/01/20 20:44:07 Created log file at /users/PHS0338/jpac1984/logs/elprep/elprep-2022-01-20-20-44-07-250202704-EST.log
    2022/01/20 20:44:07 Command line: [elprep sfm AL91.sam AL91.output.bam --filter-unmapped-reads --nr-of-threads 28 --tmp-path /tmp/slurmtmp.17532726 --mark-duplicates --mark-optical-duplicates AL91.metrics --sorting-order coordinate --bqsr AL91.recal --reference /users/PHS0338/jpac1984/data/myse-hapog.elfasta --haplotypecaller AL91.vcf.gz]
    2022/01/20 20:44:07 Executing command:
     elprep sfm AL91.sam AL91.output.bam --filter-unmapped-reads --mark-duplicates --mark-optical-duplicates AL91.metrics --optical-duplicates-pixel-distance 100 --bqsr AL91.recal --reference /users/PHS0338/jpac1984/data/myse-hapog.elfasta --quantize-levels 0 --max-cycle 500 --haplotypecaller AL91.vcf.gz --sorting-order coordinate --nr-of-threads 28 --tmp-path /tmp/slurmtmp.17532726 --intermediate-files-output-prefix AL91 --intermediate-files-output-type sam
    2022/01/20 20:44:07 Splitting...
    2022/01/20 21:01:22 Filtering (phase 1)...
    2022/01/20 21:29:00 Filtering (phase 2) and variant calling...

    Hopefully, I am doing the proper procedure and not wasting time.

    Best regards,

    Juan

  • "exit status 2" error

    Hey,

    It's a really exciting variant calling tool. I would like to use it for an aligner benchmark. But I ran it on an SSD and always got an "exit status 2" error. I attached the log and the commands. Thanks.

    Log:

    2021/12/10 12:47:43 Splitting...
    2021/12/10 12:54:06 Filtering (phase 1)...
    2021/12/10 13:05:51 Filtering (phase 2) and variant calling...
    2021/12/10 13:05:51 exit status 2
    

    Command:

    #1.	ref 
    elprep fasta-to-elfasta /ssd-path-to-folder/data/hg37.fna \
    /ssd-path-to-folder/data/elprep/hg37.elfasta
     
    #2.	sites 
    elprep vcf-to-elsites /ssd-path-to-folder/yan/variant-call/v2.19-out/truth_snp.recode.vcf \
    /ssd-path-to-folder/data/elprep/hg37_snp.elsites
     
    elprep vcf-to-elsites /ssd-path-to-folder/yan/variant-call/v2.19-out/truth_indels.recode.vcf \
    /ssd-path-to-folder/data/elprep/hg37_indels.elsites
     
    #3.	variant calling
    elprep sfm /ssd-path-to-folder/yan/NA12878/accalign.mason.bam \
    /ssd-path-to-folder/yan/NA12878/NA12878.output.bam \
    --mark-duplicates --mark-optical-duplicates /ssd-path-to-folder/yan/NA12878/NA12878.output.metrics \
    --sorting-order coordinate \
    --bqsr /ssd-path-to-folder/yan/NA12878/NA12878.output.recal \
    --known-sites /ssd-path-to-folder/data/elprep/hg37_indels.elsites,/ssd-path-to-folder/data/elprep/hg37_snp.elsites \
    --reference /ssd-path-to-folder/data/elprep/hg37.elfasta \
    --haplotypecaller accalign.gatk.vcf.gz
    