With the changes in the Matrix class, another round of changes to the ODE solver. The Runge-Kutta function has primarily been changed to remove all object constructions and object returns inside the functions. The new solver function is:

The full code is:
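In the spirit of the change, here is a minimal sketch of a constructor-free Runge-Kutta step (the names rk4_step and deriv and the scratch vectors are my own illustration, not the original listing). All work vectors are allocated once and overwritten in place on every call, so no objects are constructed or returned inside the time loop.

```python
def deriv(out, x, t):
    """Write dx/dt into the preallocated vector 'out' (dx/dt = -x here)."""
    for i in range(len(x)):
        out[i] = -x[i]

def rk4_step(x, t, dt, k1, k2, k3, k4, tmp):
    """Advance x in place by one RK4 step; k1..k4 and tmp are scratch vectors."""
    n = len(x)
    deriv(k1, x, t)
    for i in range(n):
        tmp[i] = x[i] + 0.5 * dt * k1[i]
    deriv(k2, tmp, t + 0.5 * dt)
    for i in range(n):
        tmp[i] = x[i] + 0.5 * dt * k2[i]
    deriv(k3, tmp, t + 0.5 * dt)
    for i in range(n):
        tmp[i] = x[i] + dt * k3[i]
    deriv(k4, tmp, t + dt)
    for i in range(n):
        x[i] += dt * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]) / 6.0

# scratch vectors allocated once, reused for every time step
x = [1.0]
k1, k2, k3, k4, tmp = [0.0], [0.0], [0.0], [0.0], [0.0]
for step in range(50000):                     # 1 second at a 20 us time step
    rk4_step(x, step * 20.0e-6, 20.0e-6, k1, k2, k3, k4, tmp)
```

The point is that the only allocations happen before the loop; inside the loop everything is overwritten element by element.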
## Sunday, December 30, 2012

### Matrices - II

A couple of changes to the matrix.py module. The ODE solver gave me an idea of how important it is to cut down on copy constructors and destructors. There were two functions in the Matrix class that come under that category and will be used almost everywhere.

The function added is the "resize" function. So if a matrix exists but is now assigned a matrix of a different dimension, the existing object will be modified rather than deleted and built again.

And the zeros function, which used to initialize self.data in the beginning and then rebuild it, now checks whether the object already exists. If it doesn't, it initializes and fills it; if the new dimensions are different from the existing ones, it resizes it.

The __init__ function calls zeros with explicit arguments for the actual rows and columns of the matrix, different from the default rows and columns, so that a resize takes place.

Finally, the __call__ function has been changed to check whether the dimensions are different and resize if they are; otherwise it just copies the elements across.

The addition, subtraction and multiplication operations still need a constructor for the result, which the functions then return. I will try to reduce the need for those in simulations but keep them for error checking, as it is much easier to type out these operators as regular commands.
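A rough sketch of how the reuse logic could fit together (my own reconstruction, not the actual matrix.py listing; the real class may differ in detail):

```python
class Matrix:
    def __init__(self, rows=1, columns=1):
        self.rows = rows
        self.columns = columns
        self.data = None
        self.zeros(rows, columns)

    def resize(self, rows, columns):
        """Reshape the existing object instead of deleting and rebuilding it."""
        self.rows = rows
        self.columns = columns
        self.data = [[0.0] * columns for _ in range(rows)]

    def zeros(self, rows, columns):
        """Initialize on first use; resize only if the dimensions changed."""
        if self.data is None or (rows, columns) != (self.rows, self.columns):
            self.resize(rows, columns)
        else:
            for r in range(rows):
                for c in range(columns):
                    self.data[r][c] = 0.0

    def __call__(self, other):
        """Copy 'other' into self, resizing only when the dimensions differ."""
        if (self.rows, self.columns) != (other.rows, other.columns):
            self.resize(other.rows, other.columns)
        for r in range(self.rows):
            for c in range(self.columns):
                self.data[r][c] = other.data[r][c]

    def __add__(self, other):
        """Still constructs a result object -- convenient, but costs a constructor."""
        result = Matrix(self.rows, self.columns)
        for r in range(self.rows):
            for c in range(self.columns):
                result.data[r][c] = self.data[r][c] + other.data[r][c]
        return result
```

Note that only __add__ (and the other operators) still create a new object; everything else mutates in place.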


## Saturday, December 29, 2012

### ODE Solver - II

So my previous post wasn't the last post of 2012. The streets are full of snow and the cold is keeping me indoors with nothing else to do but code. A couple of changes to the previous solver program:

The changes are a couple of bug fixes. In the function "mat_equation"

matrix_x.data[c1][0]=0.0

has been added, as the result vector x has to be initialized to zero if the input is a vector. Similarly, in the function "mat_ode"

ddt_mat.data[c1][0]=0.0

has been added to initialize the value of dx/dt if the input is a vector.

A major change has been the ability to solve the ODE when matrix E is singular. In this case, for an n x n matrix E, the rank will be k < n. This means that after the row operations, the lowest (n-k) rows will be zero rows. But if the diagonal elements of matrix A in these rows are not zero, the corresponding variables can still be calculated, and that is what this code does:

The concept here is that through the matrix A, these variables are still linked to the dynamic variables and so even if they are not directly affected by an input signal, they should be non-zero.
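As a sketch of that idea (my own reconstruction with hypothetical names, not the actual listing): after row reduction, a zero row r of E turns that row of the ODE into the algebraic equation 0 = A[r].x + B[r]*u, which can be solved for x[r] whenever A[r][r] is non-zero.

```python
def solve_zero_row(A, B, x, u, r):
    """Compute x[r] from the algebraic row r: 0 = A[r].x + B[r]*u."""
    acc = B[r] * u
    for j in range(len(x)):
        if j != r:
            acc += A[r][j] * x[j]
    x[r] = -acc / A[r][r]

# hypothetical 2x2 example where row 1 of E reduced to zeros
A = [[-1.0, 0.5],
     [ 2.0, -4.0]]
B = [1.0, 0.0]
x = [3.0, 0.0]
solve_zero_row(A, B, x, 1.0, 1)   # x[1] = -(2.0*3.0)/(-4.0) = 1.5
```

So the variable is recovered from its algebraic link to the dynamic variables rather than from an integration.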


## Friday, December 28, 2012

### Solving ODEs

One of the major components in developing the circuit simulator is solving the ordinary differential equation (ODE) "E dx/dt = Ax + bu". Moreover, the matrices need to be determined based on the network topology, the values of the different components and, finally, in the case of power electronics, the state of a device. So to begin with, a function to solve the ODE.

It works with the basic resistor, inductor, voltage source:

+------R1----L1-------+------R3----L3------+
|                     |                    |
V                     R2                   |
|                     |                    |
|                     L2                   |
|                     |                    |
+---------------------+--------------------+

I hate drawing circuits. I draw so many of them while publishing papers, I can't get myself to draw for this blog. But I think this should give an idea of what I am simulating.

A few things to consider:

1. The code has been written with simulation speed in mind. As far as possible, arguments are passed to functions as references and used as such. Copy constructors are not invoked; the matrices are changed inside the function and the changes are reflected in the main program that calls it.

The reason: if I were to make the functions black boxes that accept inputs without modifying them and return outputs to be used in the larger algorithm, I would need to copy the matrices each time the function is called. That would mean copy constructors and destructors invoked with every call. A simulation might run for 1 second with a time step of 20 microseconds, which means 50,000 time steps and hundreds of thousands of function calls, with constructors and destructors invoked just as many times. This would give a normal desktop a severe bellyache.

So, by using references wherever possible, the simulation runs faster as the number of copy constructions is kept to a minimum.
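A tiny illustration of the point: Python passes object references into functions, so a function can fill a caller-owned result in place with no copies made (the names here are my own, just for demonstration).

```python
def scale_in_place(vec, factor):
    """Modify the caller's list directly instead of returning a new one."""
    for i in range(len(vec)):
        vec[i] *= factor

state = [1.0, 2.0, 3.0]
scale_in_place(state, 2.0)
# 'state' is mutated in place -- no copy constructor, no new object created
```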

2. The function call:

mat_ode(e, a, b, [curr_state_vec, next_state_vec], u, 20.0e-6, ode_solver)

Looks so neat but hides the fact that several vectors are grouped together and passed as a single object:

ode_solver=[ode_k1, ode_k2, ode_k3, ode_k4, ode_dbydt]

Here I must say that I like this feature of Python. You pass anything into a function and it is treated as an object. As long as you know what to do inside the function, you are OK. Moreover, here "ode_solver" is a mutable object - a list of lists (with lists)! So inside the function:

k1=runge_vectors[0]

Will help me unpack the different components.
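The unpacking looks something like this (a sketch; plain Python lists stand in for the Matrix objects, and any change made through the unpacked names is visible to the caller because the lists are mutable):

```python
def mat_ode_sketch(runge_vectors):
    # unpack the scratch storage the caller allocated once
    k1 = runge_vectors[0]
    k2 = runge_vectors[1]
    k3 = runge_vectors[2]
    k4 = runge_vectors[3]
    dbydt = runge_vectors[4]
    k1[0] = 42.0          # mutation, so the caller sees it too

ode_k1, ode_k2, ode_k3, ode_k4, ode_dbydt = [0.0], [0.0], [0.0], [0.0], [0.0]
ode_solver = [ode_k1, ode_k2, ode_k3, ode_k4, ode_dbydt]
mat_ode_sketch(ode_solver)
# ode_k1 now holds [42.0] -- the grouped list was passed by reference
```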

3. This means I need to take a look at the matrix.py class definition and try to make sure constructors are avoided as far as possible.

4. The ODE is solved assuming that E is non-singular. But a look at the circuit will tell you that if L2=L3=0, E will be singular, yet it is still a valid circuit with one variable having a d/dt term. So the ODE solver will have to check for singularity and solve that case separately.

Anyway, my guess is this will be the last post of 2012.


## Tuesday, December 25, 2012

### Matrix operations

So my first piece of code will be with respect to matrix operations. NumPy already has many in-built functions, but I still want to do this because I want to write my own code in the beginning rather than just use in-built components. So here goes matrix.py.

A few thoughts particularly with respect to my ex-darling C++.

1. There isn't the possibility of overloading the "=" assignment operator. So if I want "a" and "b" as matrices and I want to do:

a=b

This will only cause the reference "a" to point to the object "b". A copy will not be made, so if I change "b" later, "a" changes too, which is not what I want. So I need to define the __call__ method. This way I could do:

a(b)

But of course you would need to define "a" as a matrix before this is run.
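A minimal illustration of the copy-via-__call__ idea (a sketch; the real Matrix class carries more than this):

```python
class Matrix:
    def __init__(self, rows, columns):
        self.rows, self.columns = rows, columns
        self.data = [[0.0] * columns for _ in range(rows)]

    def __call__(self, other):
        """Element-by-element copy, so 'a(b)' behaves like a value assignment."""
        for r in range(self.rows):
            for c in range(self.columns):
                self.data[r][c] = other.data[r][c]

b = Matrix(2, 2)
b.data[0][0] = 7.0
a = Matrix(2, 2)   # "a" must already be defined as a matrix
a(b)               # copies the elements; does not rebind the name
b.data[0][0] = 9.0
# a.data[0][0] is still 7.0; with plain 'a = b' it would have become 9.0
```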

2. Indentation is a requirement in Python. It improves the readability of code. True. But doing away with braces {}, or the begin/end statement blocks is probably going too far. Just a beginner's thought anyway.

3. Function/Operator overloading: In C++, you can overload a function any number of times with a different number of variables, different type of variables etc. So an overloaded function could accept a float or a string and two completely different functions can be written. In Python, everything is an object. So you can't specify float or Matrix before an argument in a function/method. You have to figure out what that is inside the functions. This I feel is unnecessary ambiguity.

I was trying to code the "*" overload operator for matrices, and would like to have the possibility of multiplying two matrices or a matrix and a float/integer. But the only way to do that was inside the overloaded operator. A little googling told me that type checking was not a good idea, probably because they want to keep the possibility of redefining the types or defining additional super-types in later releases of Python. So the only thing I could use was exception handling. The code works, but this isn't really an exception; it is a normal variation in the manner in which the operator is used on matrices. So it seems odd.
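For comparison, here is how the scalar-or-matrix dispatch could look with an explicit isinstance check instead of exception handling (my own sketch, not the matrix.py code):

```python
class Matrix:
    def __init__(self, rows, columns):
        self.rows, self.columns = rows, columns
        self.data = [[0.0] * columns for _ in range(rows)]

    def __mul__(self, other):
        if isinstance(other, (int, float)):          # matrix * scalar
            result = Matrix(self.rows, self.columns)
            for r in range(self.rows):
                for c in range(self.columns):
                    result.data[r][c] = self.data[r][c] * other
        else:                                        # matrix * matrix
            result = Matrix(self.rows, other.columns)
            for r in range(self.rows):
                for c in range(other.columns):
                    s = 0.0
                    for k in range(self.columns):
                        s += self.data[r][k] * other.data[k][c]
                    result.data[r][c] = s
        return result
```

The try/except version does the same dispatch by attempting the matrix product first and falling back to scalar multiplication when the attribute access fails.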

4. Two more functions may be written depending on need - inverse and upper or lower triangularization. This will be decided later. The idea would be to solve equations of the form Ax=b or E*d/dt(x)=Ax+bu. Using the inverse to solve equations is usually avoided, as a matrix even close to singular will cause the simulation to blow up. So more on this later.
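The usual alternative to inverting A is Gaussian elimination with partial pivoting, solving Ax=b directly. A sketch on plain lists of lists (my own code, just to record the idea):

```python
def gauss_solve(A, b):
    """Solve Ax = b by elimination and back-substitution (A, b modified in place)."""
    n = len(A)
    for col in range(n):
        # partial pivoting: bring the largest remaining entry to the diagonal
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        # eliminate the entries below the pivot
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back-substitution on the upper-triangular system
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / A[r][r]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
# solves 2*x0 + x1 = 3 and x0 + 3*x1 = 5
```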


## Friday, December 21, 2012

### The beginning

Well, not really the beginning. Started this two years ago but had to stop because I was too busy. Besides, I was in a company where a technical blog could have been a conflict of interest. Now back to a university setting, I can restart this extremely interesting activity.

I started coding for simulations in the year 2007. To some extent, the real reason was that I was more or less fed up with the commercial circuit simulators. They were expensive and, worse, buggy. Another problem was that simulating complex circuits for long durations led to programs that ran for a long time. In a Windows setting that essentially means your computer is unusable during the simulation and sometimes needs a restart after the simulation ends.

When I first started coding in C, the pain was incredible. The smallest blocks that were available in simulators had to be coded and then made into usable functions. But as the complexity of the circuits grew, I found that circuits with four or five converters could be simulated without ever fearing a crash. I could run six C programs simultaneously in independent terminals in Linux, which could take hours, and at the same time use my computer as if nothing was running in the background.

This led me to convert my C code to C++. An immensely rewarding experience that finally ended with me completing my PhD. Anyway, since then I started learning Python. Python simplifies life to some extent with in-built functions. But most importantly, it is free and open source. With the rapid development taking place in Python, I thought this might be the right time to jump in.

One of the mistakes I made while coding in C/C++ was that I never documented my steps. At that time, I wasn't into blogging. Well, now I am. So here goes. I am going to start coding in Python with the objective of building a circuit simulator specifically for power electronics. This blog is going to be free-flowing, so it sometimes may seem like the ramblings of a drunkard. Well, I want it to be so.

So this begins this journey of a tech blog. I hope this lasts as long as I am in university as a post doctoral researcher which I hope will be for the next two years. So God save me and God save those who read this blog :)

