Where did this useful matrix decomposition come from for Nodal Analysis?





Background



The equations formed when finding the nodal voltages of a circuit can be expressed, using nodal analysis, through a square system matrix $\mathbf{S}$ (let's say $m \times m$) which describes the connections and the values of the conductances corresponding to those connections, so that the whole circuit can be written as
$$
\mathbf{S}\mathbf{v} = \mathbf{i}
$$

where $\mathbf{v}$ is the collection of nodal voltages and $\mathbf{i}$ is the vector of input current sources.



Super useful matrix decomposition



In this paper, I have seen this decomposed (for a single impedance type, e.g. resistance) into
$$
\mathbf{S} = \mathbf{N}\,\mathbf{G}\,\mathbf{N}^\mathrm{T}
$$

where $\mathbf{N}$ specifies the connections and is an $m \times m$ incidence matrix containing only the values 1, 0, and -1, and $\mathbf{G}$ is an $m \times m$ diagonal matrix containing the conductance values.
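For concreteness, here is a minimal example I worked through myself (a toy circuit, not one from the paper): two non-ground nodes, with a conductance $g_1$ from node 1 to ground and $g_2$ between nodes 1 and 2, where each row of $\mathbf{N}$ corresponds to a node and each column to a branch:
$$
\mathbf{N} = \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix}, \qquad
\mathbf{G} = \begin{pmatrix} g_1 & 0 \\ 0 & g_2 \end{pmatrix}, \qquad
\mathbf{N}\,\mathbf{G}\,\mathbf{N}^\mathrm{T} = \begin{pmatrix} g_1 + g_2 & -g_2 \\ -g_2 & g_2 \end{pmatrix},
$$
which is exactly the $\mathbf{S}$ that writing KCL at the two nodes gives.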



This is a ridiculously useful property, as it separates the conductances from the connections, making them both easily readable. None of the matrix decompositions I've read up on makes it clear how this works or how you'd intuitively think to apply this decomposition. Could someone explain this?



Notes



The paper actually uses modified nodal analysis, but this doesn't change the application, as the decomposition is only used on the nodal aspects of the circuit, not the voltage sources.










math nodal-analysis

asked 6 hours ago by loudnoises

          2 Answers

According to the document you linked, it appears to me that $\mathbf{N}$ isn't $m \times m$. Instead, it has one row per two-terminal circuit element (from a quick reading) and one column for each circuit node.



          This technique has been used for decades in computing to create connections. I've used them for finding Hamiltonian cycles in graphs, for example. It's a really simple way of expressing connections.



For example, here's a 35-year-old piece of code I wrote to test out a method for finding the existence of such cycles:



#include <stdio.h>
#include <stdlib.h>

typedef enum { false= 0, true= 1 } bool_t;

/* Print a found cycle, repeating the starting node at the end. */
void hamPrint( int n, int *path ) {
    int i;
    for ( i= 0; i < n; ++i )
        printf( " %d ", path[i] );
    printf( " %d\n", path[0] );
    return;
}

/* Vertex v can extend the path only if it is adjacent to the last vertex
   already on the path and has not been visited yet. */
bool_t hamOkay( int n, int v, bool_t *graph, int *path, int pos ) {
    int i;
    if ( graph[ path[pos-1]*n + v ] == false ) return false;
    for ( i= 0; i < pos; ++i ) if ( path[i] == v ) return false;
    return true;
}

/* Recursive backtracking search for a Hamiltonian cycle. */
bool_t hamCycleSolver( int n, bool_t *graph, int *path, int pos ) {
    int v;
    if ( pos == n )
        return graph[ path[pos-1]*n + path[0] ];
    for ( v= 1; v < n; ++v )
        if ( hamOkay( n, v, graph, path, pos ) ) {
            path[pos]= v;
            if ( hamCycleSolver( n, graph, path, pos+1 ) == true )
                return true;
            path[pos]= -1;
        }
    return false;
}

bool_t hamCycleExist( int n, bool_t *graph ) {
    bool_t stat;
    int i, *path= (int *) malloc( sizeof(int) * n );
    if ( path == NULL ) return false;
    for ( i= 0; i < n; ++i )
        path[i]= -1;
    path[0]= 0;
    stat= hamCycleSolver( n, graph, path, 1 );
    if ( stat == true ) hamPrint( n, path );
    free( path );
    return stat;
}

bool_t graph[][5]= {       /* Create the following graph:   */
    { 0, 1, 0, 1, 0 },     /*   (0)         (2)             */
    { 1, 0, 1, 1, 1 },     /*    |  \     /  |              */
    { 0, 1, 0, 0, 1 },     /*    |   (1)     |              */
    { 1, 1, 0, 0, 1 },     /*    |  /     \  |              */
    { 0, 1, 1, 1, 0 },     /*   (3)---------(4)             */
};

int main( void ) {
    if ( hamCycleExist( sizeof(graph)/sizeof(graph[0]), (bool_t *) graph ) )
        printf( "Graph is Hamiltonian\n" );
    else
        printf( "Graph is not Hamiltonian\n" );
    return 0;
}


Take note of the use of a connection matrix in the array graph. In this case, the connections must be specified in both directions: there are "1"s to connect, for example, node 0 to node 1 and also node 1 to node 0. It would be easy to change this matrix to specify a path from node 0 to node 1 without specifying one from node 1 to node 0; I just didn't do that in the above case. All connections there are explicitly arranged to work in both directions.



If interested, you can also multiply such a matrix by an appropriate vector to get, for each node, its connections to the entries selected by that vector.
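For instance, here's a minimal sketch of that idea (the adjCount name and the selection vector are just mine, for illustration): multiplying the adjacency matrix by an indicator vector counts, for each node, how many of the selected nodes it connects to.

#include <stdio.h>

#define NODES 5

/* out[i] = row i of the adjacency matrix dotted with sel[], i.e. how many
   of node i's neighbours are flagged in sel[]. */
void adjCount( int n, const int *graph, const int *sel, int *out ) {
    int i, j;
    for ( i= 0; i < n; ++i ) {
        out[i]= 0;
        for ( j= 0; j < n; ++j )
            out[i] += graph[i*n + j] * sel[j];
    }
}

int main( void ) {
    /* same connectivity as the graph used above */
    int graph[NODES*NODES]= {
        0, 1, 0, 1, 0,
        1, 0, 1, 1, 1,
        0, 1, 0, 0, 1,
        1, 1, 0, 0, 1,
        0, 1, 1, 1, 0,
    };
    int sel[NODES]= { 1, 0, 0, 1, 0 };    /* pick nodes 0 and 3 */
    int out[NODES];
    int i;
    adjCount( NODES, graph, sel, out );
    for ( i= 0; i < NODES; ++i )
        printf( "node %d connects to %d of the selected nodes\n", i, out[i] );
    return 0;
}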



          In any case, here is a web page I readily found on google that may also help demonstrate that these ideas have been around for a long time and are in regular use:
          Graph representations.



I had simply borrowed the idea myself; I didn't invent it. So it pre-dates my use, and that means it is practically ancient. ;) I wouldn't be the least bit surprised to hear it dates into the 1800s.






answered 5 hours ago by jonk

            I'm no mathematician but I feel strongly this is related to the singular value decomposition (SVD) or eigendecomposition.



I first came across the SVD in the context of modelling MIMO communication systems, particularly those using spatial multiplexing. I'll try to detail this to explain why I think it relates to your problem, which I am not able to answer directly.



Consider a time-invariant, noiseless MIMO channel. This can be represented as
$$
\mathbf{y} = H(\omega)\,\mathbf{x}
$$



where $H$ is a matrix of transfer functions between the various parallel channels. Ideally, $H$ would be diagonal and there would be no coupling between the channels. The presence of off-diagonal entries means that equalization will be required to prevent the channels from interfering.



The SVD decomposes $H$ into
$$
H = U \Lambda V^*
$$



where $U$ and $V$ can be thought of as rotations and $\Lambda$ is a diagonal matrix that simply scales each channel individually. $U$ and $V$ are both unitary matrices, so their inverses are their conjugate transposes. The columns of $U$ and $V$ also form orthonormal bases, so they can be thought of as the natural 'coordinate system' for solving the problem.



Intuitively, it takes the input channels, which are not orthogonal, and applies a transformation at the input and output that makes the behavior of the channel very simple: just attenuation (the matrix $\Lambda$).



This has application to equalization: if we pre-multiply our input signals by $V$, pass them through the channel, and apply $U^*$ to the output, we get
$$
\mathbf{y} = U^* U \Lambda V^* V \mathbf{x} = \Lambda \mathbf{x},
$$



which gives us completely orthogonal channels that do not interfere. This reminds me very much of your problem, with the connection matrices being the natural orthogonal basis to use and the conductances simply scaling these.
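As a toy numerical illustration of that cancellation (my own numbers, nothing to do with your circuit): take
$$
H = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
= \underbrace{\tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}}_{U}\,
\underbrace{\begin{pmatrix} \sqrt{2} & 0 \\ 0 & \sqrt{2} \end{pmatrix}}_{\Lambda}\,
\underbrace{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}}_{V^*}.
$$
Here the precoder $V$ is trivial, and applying $U^*$ at the receiver leaves two decoupled channels, each with a plain gain of $\sqrt{2}$.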



The SVD also has some interesting applications in image processing.



            Edit: The decomposition in question is definitely an eigenvalue decomposition, of which the SVD can be thought of as a generalization.






answered 3 hours ago by jramsay42