Is it 40% or 0.4%?
A variable, which should contain percents, also contains some "ratio" values, for example:
0.61
41
54
.4
.39
20
52
0.7
12
70
82
The true distribution parameters are unknown, but I guess it is unimodal, with most (say, over 70% of) values occurring between 50% and 80%; very low values (e.g., 0.1%) are also possible.
Are there any formal or systematic approaches to determine the likely format in which each value is recorded (i.e., ratio or percent), assuming no other variables are available?
Tag: data-cleaning

– Orion (asked 4 hours ago, edited 4 hours ago)
I'm voting to close this question as off-topic because it is impossible to definitively answer. If you don't know what the data mean, how will strangers on the internet know? – Sycorax, 4 hours ago

Read it again. It is not about asking strangers on the Internet to guess the data mean. – Orion, 4 hours ago

What the data mean != what is the (data) mean. – Nick Cox, 4 hours ago

Oh, OK. Correction: the question is not about asking strangers on the Internet what the data mean. Hooray. – Orion, 4 hours ago

You have 3 options: your big numbers are falsely big and need a decimal in front; your small numbers are falsely small and need a 100x multiplier; or your data is just fine. Why don't you plot the qqnorm of all three options? – EngrStudent, 4 hours ago
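EngrStudent's comment refers to R's qqnorm; below is a rough Python rendering of the same idea (an editor's sketch, not part of the thread), using scipy.stats.probplot to draw a normal Q-Q plot for each of the three candidate cleanings. The values are taken from the question; the labels are illustrative.

```python
# Sketch of the "plot all three options" idea; probplot stands in for R's qqnorm.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

values = np.array([0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82])

candidates = {
    "as recorded": values,
    "big values / 100": np.where(values > 1, values / 100.0, values),
    "small values * 100": np.where(values <= 1, values * 100.0, values),
}

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (label, v) in zip(axes, candidates.items()):
    stats.probplot(v, dist="norm", plot=ax)  # normal Q-Q plot for this cleaning
    ax.set_title(label)
plt.tight_layout()
plt.show()
```

The option whose points hug the reference line most closely is the most plausible single-format reading; a kink or split cloud in the "as recorded" panel is a sign that the two formats really are mixed.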
2 Answers
Assuming:
- The only data you have are the percents/ratios (no other related explanatory variables).
- Your percents come from a unimodal distribution $P$ and the ratios come from the same unimodal distribution, but scaled down by $100$ (call it $P_{100}$).
- The percents/ratios all lie between $0$ and $100$.
Then there's a single cutoff point $K$ (with $K < 1$, obviously) such that everything under $K$ is more likely to have been sampled from $P_{100}$ and everything over $K$ is more likely to have been sampled from $P$.
You should be able to set up a maximum-likelihood function with a binary indicator for each data point, plus any parameters of your chosen $P$.
Afterwards, find $K$, the point where the densities of $P$ and $P_{100}$ intersect, and use it to clean your data.
In practice, just split your data into $(0, 1)$ and $(1, 100)$, fit and plot both histograms, and fiddle around with what you think $K$ is.
– djma (answered 3 hours ago)
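One way to operationalize this (an editor's sketch, not part of djma's answer): treat every value above 1 as an unambiguous percent, fit a simple unimodal density to those values on the proportion scale, and then compare per-point likelihoods for the ambiguous sub-1 values. The Beta model and the 50/50 prior below are illustrative assumptions.

```python
# Minimal sketch: approximate the percent distribution by a Beta fitted to the
# unambiguous values (anything above 1 must already be a percent).
import numpy as np
from scipy import stats

values = np.array([0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82])

# Fit the distribution of proportions (percent / 100) from the unambiguous rows.
unambiguous = values[values > 1] / 100.0
a, b, _, _ = stats.beta.fit(unambiguous, floc=0, fscale=1)

def classify(v, prior_ratio=0.5):
    """Label an ambiguous value v <= 1 as 'ratio' or 'percent'."""
    # Hypothesis 1: v is already a proportion (a "ratio"), e.g. 0.4 means 40%.
    lik_ratio = stats.beta.pdf(v, a, b)
    # Hypothesis 2: v is a small percent, e.g. 0.4 means 0.4%; on the observed
    # scale its density picks up a 1/100 Jacobian from the change of variables.
    lik_percent = stats.beta.pdf(v / 100.0, a, b) / 100.0
    if prior_ratio * lik_ratio > (1 - prior_ratio) * lik_percent:
        return "ratio"
    return "percent"

for v in values[values <= 1]:
    print(v, "->", classify(v))
```

With the example data this will generally label the sub-1 entries as ratios, since a 0.4% value would sit far in the left tail of a distribution concentrated between 50% and 80%; a fuller treatment would estimate the binary labels and the parameters of $P$ jointly, as the answer suggests.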
Here's one method of determining whether your data are percents or proportions: if there are out-of-bounds values for a proportion (e.g., 52, 70, 82, 41, 54, to name a few), then they must be percents.
Therefore, your data must be percents. You're welcome.
– beta1_equals_beta2 (answered 4 hours ago)
The issue is that the two are mixed together. It's not all percents or all ratios/proportions. 49 is a percentage, but 0.49 could be either. – The Laconic, 4 hours ago

If you can't assume there is a unified format for all of the rows, then the question is obviously unanswerable. In the absence of any other information, it's anyone's guess whether 0.4 is a proportion or a percentage. I chose to answer the only answerable interpretation of the question. – beta1_equals_beta2, 4 hours ago
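For what it's worth, the out-of-bounds check this answer relies on is a one-liner (editor's sketch below); as The Laconic points out, it only establishes that percents are present in the column, not which of the sub-1 values are ratios.

```python
# A proportion cannot exceed 1, so any value above 1 proves percents are present.
values = [0.61, 41, 54, 0.4, 0.39, 20, 52, 0.7, 12, 70, 82]

contains_percents = any(v > 1 for v in values)
print(contains_percents)  # True: at least some rows are recorded as percents
```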