The probability of reaching the absorbing states from a particular transient state?
Can I use the data available from MarkovProcessProperties to compute the probability of reaching each of the absorbing states from a particular transient state?

In an earlier post, kglr showed a solution involving the probabilities from State 1. Can that solution be amended easily to compute the probabilities from any of the transient states?

markov-chains markov-process

asked 3 hours ago by user120911, edited 3 hours ago
Do you mean something like this?

StationaryDistribution[DiscreteMarkovProcess[{1, 0, 0}, {{0, 1/2, 1/2}, {0, 1, 0}, {0, 0, 1}}]]

– Sjoerd Smit, 3 hours ago
I am looking for a solution like the one kglr showed in the link, but more flexible, with the possibility of specifying the particular transient state to be examined.

– user120911, 2 hours ago
1 Answer
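(* A 10-state chain in which states 4, 7, 9 and 10 are absorbing (each has a self-loop with probability 1). *)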
proc = DiscreteMarkovProcess[1, {{0., 0.5, 0., 0., 0.5, 0., 0., 0., 0., 0.},
{0., 0., 0.5, 0., 0., 0.5, 0., 0., 0., 0.},
{0., 0., 0., 0.5, 0., 0., 0.5, 0., 0., 0.},
{0., 0., 0., 1., 0., 0., 0., 0., 0., 0.},
{0., 0., 0., 0., 0., 0.5, 0., 0.5, 0., 0.},
{0., 0., 0., 0., 0., 0., 0.5, 0., 0.5, 0.},
{0., 0., 0., 0., 0., 0., 1., 0., 0., 0.},
{0., 0., 0., 0., 0., 0., 0., 0., 0.5, 0.5},
{0., 0., 0., 0., 0., 0., 0., 0., 1., 0.},
{0., 0., 0., 0., 0., 0., 0., 0., 0., 1.}}];
Graph[proc]
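(* Transient classes, absorbing classes, and the limiting transition matrix;
   for a transient state i and an absorbing state j, ltm[[i, j]] is the
   probability of eventually being absorbed in j when starting from i. *)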
{tr, ab, ltm} = MarkovProcessProperties[proc, #] & /@
{ "TransientClasses", "AbsorbingClasses", "LimitTransitionMatrix"};
TeXForm @ TableForm[ltm[[Flatten@tr, Flatten@ab]],
TableHeadings -> {Flatten@tr, Flatten@ab}]
$\begin{array}{ccccc}
  & 4 & 7 & 9 & 10 \\
3 & 0.5 & 0.5 & 0. & 0. \\
6 & 0. & 0.5 & 0.5 & 0. \\
2 & 0.25 & 0.5 & 0.25 & 0. \\
8 & 0. & 0. & 0.5 & 0.5 \\
5 & 0. & 0.25 & 0.5 & 0.25 \\
1 & 0.125 & 0.375 & 0.375 & 0.125 \\
\end{array}$
answered 2 hours ago by kglr, edited 2 hours ago
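To address the follow-up in the comments — querying a specific transient state — the same data can be wrapped in a small helper. This is a minimal sketch assuming the proc defined above; the name absorptionProbabilities is a hypothetical label, not a built-in function.

(* absorptionProbabilities is a hypothetical helper name; it reads off one row
   of the limiting transition matrix, restricted to the absorbing states. *)
absorptionProbabilities[process_, state_] :=
 Module[{ab, ltm},
  {ab, ltm} = MarkovProcessProperties[process, #] & /@
    {"AbsorbingClasses", "LimitTransitionMatrix"};
  AssociationThread[Flatten[ab] -> ltm[[state, Flatten[ab]]]]]

absorptionProbabilities[proc, 5]

If the absorbing classes come back in the order shown in the table, this should return something like <|4 -> 0., 7 -> 0.25, 9 -> 0.5, 10 -> 0.25|>, i.e. the row for state 5 above.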