0:17

Just like when you do an experiment, there are errors, because there are limitations: the specifications of your pipettes, how accurately you can pipette something, how accurately your machine reads things. In the same way, there are errors in computation that one needs to pay attention to, and it is really important to minimize these errors. There are a couple of things that we need to consider.

It's really important that these models of cell biological systems obey thermodynamic constraints. Indeed, in [LAUGH] real life, all of the reactions within a cell operate under thermodynamic constraints, and this needs to hold in the model as well.

What does this mean?

This means that one needs to ensure that there is mass conservation.

So, what is mass conservation?

[SOUND] Mass conservation means that, unless there are synthesis or degradation processes, the total mass of each component should remain constant during the simulations. So, for enzymatic reactions, for instance, the total amount of product can never be greater than the starting level of substrate, because where else would the product come from?

Similarly, in a multi-compartment model, if there is movement of activated MAP kinase from the cytoplasm to the nucleus, the total amount of activated MAP kinase in the cytoplasm needs to decrease, and it needs to go up in the nucleus, but the total amount of MAP kinase in the cell remains constant.
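As a minimal sketch of this conservation check in a two-compartment model (plain Python; the rate constants and concentrations are illustrative assumptions, not the lecture's actual model):

```python
# Two-compartment shuttling of activated MAP kinase between cytoplasm
# and nucleus. Transport only moves mass between compartments, so the
# total must stay constant. All values here are made up for illustration.
k_in, k_out = 0.3, 0.1        # transport rates (1/s), assumed values
dt, n_steps = 0.01, 1000

mapk_cyt, mapk_nuc = 10.0, 0.0   # concentrations (arbitrary units)
total0 = mapk_cyt + mapk_nuc

for _ in range(n_steps):
    flux = k_in * mapk_cyt - k_out * mapk_nuc   # net cytoplasm -> nucleus
    mapk_cyt -= flux * dt
    mapk_nuc += flux * dt

# No synthesis or degradation, so total mass must remain constant.
assert abs((mapk_cyt + mapk_nuc) - total0) < 1e-9
```

Because transport only moves mass between compartments, the total is conserved to within floating-point round-off; a drifting total in a real simulation is a sign of an integration error.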

The other important aspect of thermodynamic constraints is to ensure microscopic reversibility.

2:29

For a closed loop of reactions in a system of coupled ODEs, the product of all of the forward rate constants around the loop should be equal to the product of all of the reverse rate constants. This ensures that the system is thermodynamically compliant, and so microscopic reversibility is an important part of making sure that the system really is thermodynamically compliant.
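A minimal sketch of checking this condition, for a hypothetical three-state loop A ↔ B ↔ C ↔ A with made-up rate constants (this loop condition is known as the Wegscheider condition):

```python
import math

# Hypothetical cycle A <-> B <-> C <-> A; all rate constants are
# illustrative. Microscopic reversibility requires the products of
# forward and reverse rate constants around the loop to be equal.
kf = [2.0, 0.5, 1.0]     # forward: A->B, B->C, C->A
kr = [1.0, 0.25, 4.0]    # reverse: B->A, C->B, A->C

prod_f = math.prod(kf)
prod_r = math.prod(kr)
assert math.isclose(prod_f, prod_r)   # both equal 1.0 here
```

If a parameter set violates this check, at least one rate constant must be adjusted before the model can be considered thermodynamically compliant.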

Once we have taken care of the thermodynamic constraints,

we still need to worry about errors in simulations.

3:07

A major source of error comes from the time step in ODE models. If the time steps are large, there are likely to be unacceptably large errors in the simulations, due to, one, error in integration and, two, the error that arises from coupling in a system of coupled ODEs. So, if one reaction is much faster than the other, then as the simulation proceeds and the fast reaction is coupled to the slow one, updating the concentrations of the reactants can produce errors. With these types of errors, the final results are unlikely to be correct or meaningful.

The solution to most of these problems is to use smaller time steps to limit the computing errors to 1% or less. The size of the time steps has to be settled upon by trial and error, so that we get overall computing errors of less than 1%. This will then allow us to compare predictions and hypotheses from the model with biochemical experiments, which typically have errors in the two to five percent range. Of course, when one decreases the time step, this increases the amount of computational power needed and the time required for the simulations.
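A minimal sketch of this time-step effect, using forward Euler on the simple decay reaction dy/dt = -k*y, where the exact answer is known (the reaction and all values are illustrative):

```python
import math

def euler(k, y0, dt, n_steps):
    """Forward-Euler integration of dy/dt = -k*y."""
    y = y0
    for _ in range(n_steps):
        y += dt * (-k * y)
    return y

k, y0, t_end = 1.0, 1.0, 1.0
exact = y0 * math.exp(-k * t_end)

# Shrinking the time step tenfold shrinks the error roughly tenfold,
# at ten times the computational cost.
for n_steps in (10, 100, 1000):
    dt = t_end / n_steps
    err = abs(euler(k, y0, dt, n_steps) - exact) / exact
    print(f"dt = {dt:g}: relative error {err:.2%}")
```

With this problem, a step of 0.1 gives roughly a 5% error, above the experimental range mentioned above, while a step of 0.01 brings it safely below the 1% target.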

Similarly, the size of the elements in PDE models is a major source of error. If the elements are very large when solving PDE models, there will be a greater error in each step of computation, and the final result will not be correct. The solution is to use progressively smaller mesh sizes, and this of course will increase the amount of computation. So, basically, in running these simulations there is a trade-off between the accuracy of the results and the speed of the computations.
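A minimal sketch of the mesh-size effect, using an explicit finite-difference solution of one-dimensional diffusion, where the exact solution is known (the equation and all values are illustrative assumptions):

```python
import math

def diffusion_error(n_cells, D=1.0, t_end=0.05):
    """Explicit finite differences for u_t = D*u_xx on [0,1], with u = 0
    at both ends and u(x,0) = sin(pi*x); returns the maximum deviation
    from the exact solution sin(pi*x)*exp(-D*pi^2*t)."""
    dx = 1.0 / n_cells
    x = [i * dx for i in range(n_cells + 1)]
    u = [math.sin(math.pi * xi) for xi in x]
    dt = 0.25 * dx * dx / D              # safely below the stability limit
    n_steps = max(1, round(t_end / dt))
    dt = t_end / n_steps                 # land exactly on t_end
    for _ in range(n_steps):
        lap = [u[i-1] - 2*u[i] + u[i+1] for i in range(1, n_cells)]
        for i in range(1, n_cells):
            u[i] += dt * D * lap[i-1] / (dx * dx)
    decay = math.exp(-D * math.pi**2 * t_end)
    return max(abs(u[i] - math.sin(math.pi * x[i]) * decay)
               for i in range(n_cells + 1))

for n in (4, 8, 16, 32):
    print(f"{n:3d} cells: max error {diffusion_error(n):.2e}")
```

Halving the element size roughly quarters the error here, but each refinement also forces a smaller stable time step, which is exactly the accuracy-versus-speed trade-off described above.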

5:19

This problem is especially acute when one studies the coupling of electrical and biochemical systems. That is, when a model contains reactions on very different time scales, it requires unrealistically small time steps to get acceptable levels of error. Such systems are called stiff, as they contain what are called stiff equations.

So, for example, the opening or closing of a calcium channel may occur on the millisecond time scale, while a chemical reaction driven by calcium, such as activation of protein kinases, occurs on the

6:08

seconds-to-minutes time scale.

So, capturing both of these kinds of reactions in the same system results in stiffness. Either one has to use these unrealistically small time steps, which means the computation needs to run for a very, very long time to compute the system's evolution for even a few seconds or a few minutes, or, if one uses larger time steps, the errors become unacceptably large.

6:49

So, reactions representing the calcium channels can be considered stiff, and there are special solvers in MATLAB to solve these stiff systems. So, one of the things when setting up a model is that one needs to be aware of the relative time scales of the various reactions, so that one can select the appropriate solver if one has stiffness in the model.
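A minimal sketch of why stiffness forces small steps, contrasting an explicit method with the kind of implicit method that stiff solvers (such as MATLAB's) use internally; the rate constant is an illustrative stand-in for a millisecond-scale channel process:

```python
# A "fast" process with rate k = 1000 /s stands in for a millisecond-scale
# channel; we try to take time steps sized for seconds-scale chemistry.
k, dt, n_steps = 1000.0, 0.01, 100   # dt is 10x the fast time constant

# Explicit (forward) Euler: y_{n+1} = y_n * (1 - k*dt). Blows up.
y_exp = 1.0
for _ in range(n_steps):
    y_exp *= (1.0 - k * dt)

# Implicit (backward) Euler: y_{n+1} = y_n / (1 + k*dt). Stays stable
# at the same step size; implicit methods like this underlie stiff solvers.
y_imp = 1.0
for _ in range(n_steps):
    y_imp /= (1.0 + k * dt)

print(f"explicit Euler: {y_exp:.3e}")   # astronomically large
print(f"implicit Euler: {y_imp:.3e}")   # decays toward 0, as it should
```

The explicit method is only stable here if dt is below about two fast time constants, which is why stiff models need either tiny steps or a dedicated stiff solver.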

7:36

signalling from EGF receptors to MAP kinase, through this feedback loop. In building these models, we apply these two kinds of constraints. For instance, for EGF-stimulated MAP kinase activity,

here is a comparison between the model and the experiments. The lines with the data points are the experiments, and the lines without the data points

8:08

are the simulations. There you can see that they closely agree, but they don't absolutely overlap; you can actually see a clear difference here. So, this is the difference between the canonical parameters we use in the model versus what is observed in real life in the cell.

So, in our lab, I typically don't encourage people to tweak the model to get a perfect match. As long as the system looks qualitatively similar, as is seen here, we accept the model as a reasonable facsimile of the experimental system.

A lower K1 gives a much better fit of the effect of PLC-gamma activation and reduced concentrations of calcium, with and without EGFR stimulation. And so this fits this part of the system, the EGF receptor to PLC-gamma part.

And so one can see that, by constraining parts of the system to experimental data, the overall model can only work, or react, within the range in which the system and all its individual pathways have been experimentally observed. So, constraining the model to experimental data is a very important step before the models are actually used for simulations.

So, when one does all of this, what can one get out of the system? What one can get out of the system are predictions from these simulations. So, as I've said many times before, such predictions should be non-obvious and need computation.

So, for instance, in the previous slide, or in this slide, one can see that there is this feedback loop here, which is a positive feedback loop. And one can ask the question: when does this positive feedback loop function as a switch? How much EGF do you need to trigger the switch, or what levels of the key components of this system will allow such a switch to function?
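As an illustrative sketch of this switch question (a toy one-variable model with a steep Hill-type positive feedback, not the lecture's EGFR model; all parameter values are assumptions), one can scan the stimulus and watch the steady state jump abruptly:

```python
def steady_state(stim, k_fb=1.0, K=0.5, n=4, k_deg=1.0,
                 dt=0.01, t_end=100.0):
    """Integrate dy/dt = stim + k_fb*y^n/(K^n + y^n) - k_deg*y from y = 0
    until it settles; y activates its own production via the Hill term."""
    y = 0.0
    for _ in range(round(t_end / dt)):
        y += dt * (stim + k_fb * y**n / (K**n + y**n) - k_deg * y)
    return y

# Scan a stand-in for the EGF stimulus and look for a sharp transition.
for stim in (0.0, 0.1, 0.2, 0.3, 0.4):
    print(f"stimulus {stim:.1f} -> steady state {steady_state(stim):.3f}")
```

Below a threshold stimulus, the steady state stays near the stimulus level; above it, the feedback engages and the response jumps to a high state, which is the switch-like behavior being asked about.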

10:43

Once one makes a prediction of what the key characteristics are that make this positive feedback loop a switch, one can then test the predictions from the simulations and see whether they are indeed viable.

In addition to making these predictive simulations, there is also a very useful

11:10

reason for creating these computational models, which is to use them as an accounting tool, where one can account for how the various components of the system interact and behave in such a way that the input-output behavior of the model matches what is seen in experiments.

This accounts for the relative activities of all of the components of the system. Quite often, the organization of large quantitative data sets, such as metabolic networks, can be used to produce these accounting-type models.

And I should caution that one type of use does not preclude the other: if one builds a good model with enough detail, the model should account for the known observations as well as generate predictions of future events that can be experimentally tested.

So, the take-home points for this lecture are as follows. Most biological models are solved by numerical simulations, so it is very important that the models use realistic and reasonable parameters.

Models should be well constrained with experimental data and the sources of error minimized, so that the simulations most accurately reflect the experimental system that is being studied by modelling.

The predictions from these models should be non-obvious: not something one can simply intuit or mentally compute, but something that requires actual computation. And the predictions from these computations should be experimentally testable.

So, with this lecture, I will conclude the bottom-up approaches and the dynamical modelling approaches that are used in current systems biology.

And if any of you have questions from lectures one through six, please be sure to bring them up in the discussion forum, in addition to the structured discussion points, so that we can resolve anything that may not have been clear in my lectures.

Thank you.

[SOUND]