scar_score() with allele-specific segmentation file fails with Error in `[.data.frame`(seg, , 8) : undefined columns selected #7
Comments
One add-on: with the columns
sample_name  chromosome  segment_start  segment_end  tcn  nA  nB  ploidy
I get:

scar_score() processes one sample each time.
I did the following:
This is the documented input format for the HRD score:
1st column: sample name,
2nd column: chromosome,
3rd column: segmentation start,
4th column: segmentation end,
5th column: total copy number,
6th column: copy number of A allele,
7th column: copy number of B allele
However, in the downstream processing what is actually needed is the same column order as in the original method for SNP6 ASCAT output:
1: sample id
2: chromosome (numeric)
3: segment start
4: segment end
5: number of probes
6: total copy number
7: nA
8: nB
9: ploidy
10: contamination, aberrant cell fraction
That worked for me; a conversion sketch follows below.
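For what it is worth, here is a minimal sketch of that conversion in R. It assumes a tab-separated 7-column file with the headers used in the issue below; the file names, the probe count, the ploidy value and the aberrant cell fraction are placeholders, and only the column order matters downstream:

# Sketch only: pad a 7-column allele-specific segmentation table to the
# 10-column SNP6/ASCAT-style layout listed above. File names, the probe
# count, the ploidy value and the aberrant cell fraction are placeholders;
# substitute your own estimates. Only the column order matters.
seg7 <- read.table("segments_7col.txt", header = TRUE, sep = "\t",
                   stringsAsFactors = FALSE)

seg10 <- data.frame(
  SampleID       = seg7$sample_name,
  Chromosome     = seg7$chromosome,   # numeric, as noted above
  Start_position = seg7$segment_start,
  End_position   = seg7$segment_end,
  nProbes        = NA,                # probe count is not produced by CNVkit
  total_cn       = seg7$tcn,
  A_cn           = seg7$nA,
  B_cn           = seg7$nB,
  ploidy         = 2,                 # placeholder ploidy estimate
  ACF            = 1                  # placeholder aberrant cell fraction
)

write.table(seg10, "segments_10col.txt", sep = "\t",
            quote = FALSE, row.names = FALSE)

# scar_score("segments_10col.txt", reference = "grch38", seqz = FALSE)
# (argument names as I recall them from the scarHRD README)

Whether the probe count may be left as NA depends on how scar_score() actually uses that column; if it complains, any integer placeholder should do.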
@ThomasGro I had exactly the same issue. Did you solve the problem eventually? Thanks.
Please look at my comment from Jan 26th.
Did this work correctly? I think their test file only has 8 columns, with "ploidy" at the end.
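If the bundled test file does use that 8-column layout (ploidy as the last column, as in the add-on above), a minimal sketch of producing it would be something like the following; file names and the ploidy value are placeholders:

# Sketch only: append a ploidy column to the 7-column table so the layout becomes
# sample_name chromosome segment_start segment_end tcn nA nB ploidy
seg <- read.table("segments_7col.txt", header = TRUE, sep = "\t",
                  stringsAsFactors = FALSE)
seg$ploidy <- 2   # placeholder: substitute your own ploidy estimate
write.table(seg, "segments_8col.txt", sep = "\t",
            quote = FALSE, row.names = FALSE)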
Hi,
I have generated an allele-specific segmentation file with CNVkit. I adapted the format to fit your input format:
"... allele-specific segmentation file with the following columns: 1st column: sample name, 2nd column: chromosome, 3rd column: segmentation start, 4th column: segmentation end, 5th column: total copynumber, 6th column: copy number of A allele, 7th column: copy number of B allele"
sample_name  chromosome  segment_start  segment_end  tcn  nA  nB
model1       1           62676830       62677251     30   19  11
model1       1           173836085      177898343    3    2   1
model1       2           677597         1217588      5    4   1
But I get the error: Error in `[.data.frame`(seg, , 8) : undefined columns selected
It seems the function is expecting an additional column?!
Thank you for your support,
Thomas
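For context, a minimal sketch of the call pattern that reproduces the error above, assuming a tab-separated 7-column file (hypothetical name) and the reference/seqz arguments as described in the scarHRD README:

library(scarHRD)

# 7-column allele-specific segmentation file as quoted from the README above
scar_score("segments_7col.txt", reference = "grch38", seqz = FALSE)
# Error in `[.data.frame`(seg, , 8) : undefined columns selected
# i.e. the code indexes an 8th column (nB in the ASCAT-style layout above),
# so a 7-column file cannot satisfy it.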