From cef446b9c99febc7ef7491e4aadc591dbc54be0d Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Fri, 5 Apr 2024 01:24:32 +0000
Subject: [PATCH] build based on cf21830

---
 dev/.documenter-siteinfo.json              |  2 +-
 dev/DeepBSDE/index.html                    |  2 +-
 dev/DeepSplitting/index.html               |  2 +-
 dev/Feynman_Kac/index.html                 |  2 +-
 dev/MLP/index.html                         |  2 +-
 dev/NNKolmogorov/index.html                |  2 +-
 dev/NNParamKolmogorov/index.html           |  2 +-
 dev/NNStopping/index.html                  |  2 +-
 dev/assets/Manifest.toml                   | 78 +++++++++++-----------
 dev/getting_started/index.html             |  8 +--
 dev/index.html                             | 38 +++++------
 dev/problems/index.html                    |  2 +-
 dev/tutorials/deepbsde/index.html          |  2 +-
 dev/tutorials/deepsplitting/index.html     |  2 +-
 dev/tutorials/mlp/index.html               |  2 +-
 dev/tutorials/nnkolmogorov/index.html      |  2 +-
 dev/tutorials/nnparamkolmogorov/index.html |  2 +-
 dev/tutorials/nnstopping/index.html        |  2 +-
 18 files changed, 77 insertions(+), 77 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 60f4020..308a768 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-29T01:24:00","documenter_version":"1.3.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-04-05T01:24:29","documenter_version":"1.3.0"}}
\ No newline at end of file

diff --git a/dev/DeepBSDE/index.html b/dev/DeepBSDE/index.html
index 61098f2..27ad8e0 100644
--- a/dev/DeepBSDE/index.html
+++ b/dev/DeepBSDE/index.html
@@ -64,4 +64,4 @@ trajectories_lower,
    maxiters_limits
)

Returns a PIDESolution object.

Arguments:

To use SDE algorithms, use DeepBSDE.

source

The general idea 💡

The DeepBSDE algorithm is similar in essence to the DeepSplitting algorithm, with the difference that it uses two neural networks to approximate both the solution and its gradient.
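
As a rough sketch of what this looks like in code (modeled on the DeepBSDE tutorial later in these docs; the dimension, layer sizes, and optimizer choice are illustrative assumptions, not prescribed values):

using HighDimPDE, Flux

d = 30          # dimension of the PDE (illustrative)
hls = d + 50    # hidden layer size (illustrative)

# network approximating the solution u(t0, x)
u0 = Flux.Chain(Dense(d, hls, relu),
                Dense(hls, hls, relu),
                Dense(hls, 1))

# network approximating the scaled gradient σᵀ∇u(t, x);
# the input is (x, t), hence d + 1 input units
σᵀ∇u = Flux.Chain(Dense(d + 1, hls, relu),
                  Dense(hls, hls, relu),
                  Dense(hls, d))

alg = DeepBSDE(u0, σᵀ∇u, opt = Flux.Optimisers.Adam(0.01))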

References

diff --git a/dev/DeepSplitting/index.html b/dev/DeepSplitting/index.html
index 835f73d..fd70870 100644
--- a/dev/DeepSplitting/index.html
+++ b/dev/DeepSplitting/index.html
@@ -20,4 +20,4 @@ cuda_device,
    verbose_rate
) -> PIDESolution{_A, _B, _C, Vector{_A1}, Vector{Any}, Nothing} where {_A, _B, _C, _A1}

Returns a PIDESolution object.

Arguments

source

The DeepSplitting algorithm reformulates the PDE as a stochastic learning problem.

The algorithm relies on two main ideas:

  • a local Feynman-Kac formula;
  • the reformulation of the approximation as a learning problem.

The general idea 💡

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x)) \tag{1}\]

with initial condition $u(0, x) = g(x)$, where $u \colon \R^d \to \R$.

Local Feynman-Kac formula

DeepSplitting solves the PDE iteratively over small time intervals by using an approximate Feynman-Kac representation locally.

More specifically, considering a small time step $dt = t_{n+1} - t_n$, one has that

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \mathbb{E} \left[ f(t_n, X_{T - t_{n}}, u(t_{n},X_{T - t_{n}}))(t_{n+1} - t_n) + u(t_{n}, X_{T - t_{n}}) | X_{T - t_{n+1}}\right] \tag{3}\]

One can therefore use Monte Carlo integrations to approximate the expectations

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + \frac{(t_{n+1} - t_n)}{K}\sum_{k=1}^{K} \big[ f(t_n, X_{T - t_{n}}^{(j)}, u(t_{n},X_{T - t_{n}}^{(j)})) \big] \right]\]
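
In code, this estimator is nothing more than a batch average over simulated points; a minimal, self-contained sketch with a hypothetical integrand h:

using Statistics

h(x) = sum(abs2, x)                           # hypothetical integrand
batch = [randn(Float32, 10) for _ in 1:1000]  # batch_size = 1000 simulated realizations of X
estimate = mean(h, batch)                     # Monte Carlo estimate of E[h(X)]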

Reformulation as a learning problem

The DeepSplitting algorithm approximates $u(t_{n+1}, x)$ by a parametric function ${\bf u}^\theta_{n+1}(x)$. It is advised to let this function be a neural network ${\bf u}_\theta \equiv NN_\theta$, since neural networks are universal approximators.

For each time step $t_n$, the DeepSplitting algorithm

  1. Generates the particle trajectories $X^{x, (j)}$ of the underlying SDE over the whole interval $[0,T]$.

  2. Seeks ${\bf u}_{n+1}^{\theta}$ by minimizing the loss function

\[L(\theta) = ||{\bf u}^\theta_{n+1}(X_{T - t_{n+1}}) - \left[ f(t_n, X_{T - t_{n}}, {\bf u}_{n}(X_{T - t_{n}}))(t_{n+1} - t_{n}) + {\bf u}_{n}(X_{T - t_{n}}) \right] ||\]

This way, the PDE approximation problem is decomposed into a sequence of separate learning problems. In HighDimPDE.jl the right parameter combination $\theta$ is found by iteratively minimizing $L$ using stochastic gradient descent.

Tip

To solve with DeepSplitting, one needs to provide the following arguments to solve:

  • dt
  • batch_size
  • maxiters: the number of iterations for minimizing the loss function
  • abstol: the absolute tolerance for the loss function
  • use_cuda: recommended if you have an Nvidia GPU.

Solving point-wise or on a hypercube

Pointwise

DeepSplitting allows obtaining $u(t,x)$ at a single point $x \in \Omega$ with the argument x.

prob = PIDEProblem(μ, σ, x, tspan, g, f)

Hypercube

More generally, one may want to solve Eq. (1) on a $d$-dimensional cube $[a,b]^d$. This is offered by HighDimPDE.jl with the keyword x0_sample.

prob = PIDEProblem(μ, σ, x, tspan, g, f; x0_sample = x0_sample)

Internally, this is handled by assigning a random variable as the initial point of the particles, i.e.

\[X_t^\xi = \int_0^t \mu(X_s^\xi)ds + \int_0^t\sigma(X_s^\xi)dB_s + \xi,\]

where $\xi$ is a random variable uniformly distributed over $[a,b]^d$. This way, the neural network is trained on the whole hypercube $[a,b]^d$ instead of at a single point.
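
For instance, one may sample $\xi$ uniformly on $[-1/2, 1/2]^d$. A sketch follows; μ, σ, g and f are placeholder definitions in the spirit of the getting-started example, and the f signature with gradient and (p, t) arguments is an assumption following the convention described on the Problems page:

using HighDimPDE

d = 10
μ(x, p, t) = 0f0                     # hypothetical drift
σ(x, p, t) = 1f-1                    # hypothetical diffusion
g(x) = exp.(-sum(x .^ 2, dims = 1))  # initial condition
f(x, y, v_x, v_y, ∇v_x, ∇v_y, p, t) = max.(0f0, v_x) .* (1f0 .- max.(0f0, v_y))

a, b = fill(-5f-1, d), fill(5f-1, d)  # the hypercube [-1/2, 1/2]^d
x0_sample = UniformSampling(a, b)     # ξ uniformly distributed over it
prob = PIDEProblem(μ, σ, fill(0f0, d), (0f0, 5f-1), g, f; x0_sample = x0_sample)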

Non-local PDEs

DeepSplitting can solve non-local reaction-diffusion equations of the type

\[\partial_t u = \mu(x) \nabla_x u + \frac{1}{2} \sigma^2(x) \Delta u + \int_{\Omega}f(x,y, u(t,x), u(t,y))dy\]

The non-local term is handled by Monte Carlo integration:

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + \frac{(t_{n+1} - t_n)}{K}\sum_{k=1}^{K} \big[ f(t_n, X_{T - t_{n}}^{(j)}, Y_{X_{T - t_{n}}^{(j)}}^{(k)}, u(t_{n},X_{T - t_{n}}^{(j)}), u(t_{n},Y_{X_{T - t_{n}}^{(j)}}^{(k)})) \big] \right]\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number $K$ of Monte Carlo samples through the keywords mc_sample and K.

alg = DeepSplitting(nn, opt = opt, mc_sample = mc_sample, K = 1)

mc_sample can be either UniformSampling(a, b) or NormalSampling(σ_sampling, shifted).
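
For example (a sketch; nn, opt, a, b and σ_sampling are assumed to be defined as elsewhere on this page):

# uniform sampling of y over the hypercube [a, b]^d ...
mc_sample = UniformSampling(a, b)
# ... or Gaussian sampling around the current point X (shifted = true)
mc_sample = NormalSampling(σ_sampling, true)

alg = DeepSplitting(nn, opt = opt, mc_sample = mc_sample, K = 10)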

References

diff --git a/dev/Feynman_Kac/index.html b/dev/Feynman_Kac/index.html
index 9a30c35..d8d995c 100644
--- a/dev/Feynman_Kac/index.html
+++ b/dev/Feynman_Kac/index.html
@@ -7,4 +7,4 @@

\[\begin{aligned}
v(\tau, x) &= \int_{-\tau}^0 \mathbb{E} \left[ f(X^x_{s + \tau}, v(s + T, X^x_{s + \tau}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]\\
&= - \int_{\tau}^0 \mathbb{E} \left[ f(X^x_{\tau - s}, v(T-s, X^x_{\tau - s}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]\\
&= \int_{0}^\tau \mathbb{E} \left[ f(X^x_{\tau - s}, v(T-s, X^x_{\tau - s}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right].
\end{aligned}\]

This leads to the

Non-linear Feynman-Kac for initial value problems

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x))\]

with initial condition $u(0, x) = g(x)$, where $u \colon \R^d \to \R$. Then

\[u(t, x) = \int_0^t \mathbb{E} \left[ f(X^x_{t - s}, u(T-s, X^x_{t - s}))ds \right] + \mathbb{E} \left[ u(0, X^x_t) \right] \tag{3}\]

with

\[X_t^x = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + x.\]
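
When $f \equiv 0$, Eq. (3) reduces to $u(t, x) = \mathbb{E} \left[ g(X^x_t) \right]$, which can be estimated directly by simulating the SDE. A self-contained toy sketch with Euler-Maruyama time stepping and hypothetical coefficients:

using Statistics

μ(x) = zero(x)         # hypothetical drift
σ(x) = 1f-1            # hypothetical (scalar) diffusion
g(x) = sum(abs2, x)    # hypothetical initial condition

# Euler-Maruyama simulation of X_t^x
function simulate_X(x, t; dt = 1f-2)
    X = copy(x)
    for _ in 1:round(Int, t / dt)
        X .+= μ(X) .* dt .+ σ(X) .* sqrt(dt) .* randn(Float32, length(X))
    end
    return X
end

x = zeros(Float32, 10)
u_est = mean(g(simulate_X(x, 5f-1)) for _ in 1:10_000)   # ≈ u(t, x) for f ≡ 0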

diff --git a/dev/MLP/index.html b/dev/MLP/index.html
index 6798139..f81f466 100644
--- a/dev/MLP/index.html
+++ b/dev/MLP/index.html
@@ -16,4 +16,4 @@

\[\begin{aligned}
u_L &= \sum_{l=1}^{L-1} \frac{1}{M^{L-l}}\sum_{i=1}^{M^{L-l}} \frac{1}{K}\sum_{j=1}^{K} \bigg[ f(X^{x,(l, i)}_{t - s_{(l, i)}}, Z^{(l,j)}, u(T-s_{(l, i)}, X^{x,(l, i)}_{t - s_{(l, i)}}), u(T-s_{(l, i)}, Z^{(l,j)})) + \\
&\qquad \mathbf{1}_\N(l) f(X^{x,(l, i)}_{t - s_{(l, i)}}, u(T-s_{(l, i)}, X^{x,(l, i)}_{t - s_{(l, i)}}))\bigg] + \frac{1}{M^{L}}\sum_{i=1}^{M^{L}} u(0, X^{x,(l, i)}_t)
\end{aligned}\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number $K$ of Monte Carlo samples through the keywords mc_sample and K (see the sketch after this list).

  • K characterizes the number of samples for the Monte Carlo approximation of the last term.
  • mc_sample characterizes the distribution of the $Z$ variables.
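
For instance (a sketch; the bounds a and b are illustrative, and the M and L keywords, controlling the number of Monte Carlo samples per level and the number of levels, are assumptions here):

using HighDimPDE

d = 10
a, b = fill(-5f-1, d), fill(5f-1, d)   # hypothetical integration domain
alg = MLP(M = 4, L = 4, K = 10, mc_sample = UniformSampling(a, b))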

References

diff --git a/dev/NNKolmogorov/index.html b/dev/NNKolmogorov/index.html
index abea744..73cd452 100644
--- a/dev/NNKolmogorov/index.html
+++ b/dev/NNKolmogorov/index.html
@@ -14,4 +14,4 @@ dx,
    kwargs...
)

Returns a PIDESolution object.

Arguments

source

NNKolmogorov obtains a terminal solution for forward Kolmogorov equations of the form

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with initial condition given by g(x), or an initial condition for backward Kolmogorov equations of the form

\[\partial_t u(t,x) = - \mu(t, x) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with terminal condition given by g(x).

We can use the Feynman-Kac formula:

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x)]\]
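
A sketch of the corresponding setup (the network and solve call mirror the NNKolmogorov tutorial later in these docs; EM() is one possible choice of SDE solver):

using HighDimPDE, Flux, StochasticDiffEq

d = 1
m = Chain(Dense(d, 16, elu), Dense(16, 32, elu), Dense(32, 16, elu), Dense(16, 1))
opt = Flux.Optimisers.Adam(0.01)
alg = NNKolmogorov(m, opt)
# with a suitably defined `prob`:
# sol = solve(prob, alg, EM(), verbose = true, dt = 0.01,
#             dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300)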

diff --git a/dev/NNParamKolmogorov/index.html b/dev/NNParamKolmogorov/index.html
index 884906d..467f274 100644
--- a/dev/NNParamKolmogorov/index.html
+++ b/dev/NNParamKolmogorov/index.html
@@ -20,4 +20,4 @@ dx,
    kwargs...
)

Returns a PIDESolution object.

Arguments

source

NNParamKolmogorov obtains a terminal solution for families of forward Kolmogorov equations, parameterized by γ_mu, γ_sigma and γ_phi, of the form

\[\partial_t u(t,x) = \mu(t, x, \gamma_\mu) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x, \gamma_\sigma) \Delta_x u(t,x)\]

with initial condition given by g(x, γ_phi), or an initial condition for backward Kolmogorov equations of the form

\[\partial_t u(t,x) = - \mu(t, x) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with terminal condition given by g(x, γ_phi).

We can use the Feynman-Kac formula:

\[S_t^x = \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x, \gamma_\phi)]\]
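
Once trained, the returned solution object can be queried at new parameter values. The following sketch mirrors the NNParamKolmogorov tutorial at the end of these docs; sol, d, tspan, dt, p_domain and dps are assumed to come from that setup:

x_test = rand(d, 1, 1)
t_test = rand(tspan[1]:dt:tspan[2], 1, 1)
p_sigma_test = rand(p_domain.p_sigma[1]:dps.p_sigma:p_domain.p_sigma[2], 1, 1)
u_val = sol.ufuns(x_test, t_test, p_sigma_test, nothing, nothing)  # γ_mu, γ_phi unused here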

diff --git a/dev/NNStopping/index.html b/dev/NNStopping/index.html
index ae0ae28..93f877f 100644
--- a/dev/NNStopping/index.html
+++ b/dev/NNStopping/index.html
@@ -10,4 +10,4 @@ ensemblealg,
    kwargs...
) -> NamedTuple{(:payoff, :stopping_time), <:Tuple{Any, Any}}

Returns a NamedTuple with payoff and stopping_time.

Arguments:

source

The general idea 💡

Similarly to DeepSplitting and DeepBSDE, NNStopping evaluates the PDE problem through an equivalent stochastic differential equation. Consider an obstacle PDE of the form:

\[\max\lbrace \partial_t u(t,x) + \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x),\; g(t,x) - u(t,x) \rbrace = 0\]

Such PDEs commonly arise in the pricing of options that can be exercised before maturity, such as American options.

Using the Feynman-Kac formula, the underlying SDE will be:

\[dX_{t}=\mu(X,t)dt + \sigma(X,t)\ dW_{t}^{Q}\]

The payoff of the option would then be:

\[\sup_{\tau} \mathbb{E}\left[ g(X_\tau, \tau) \right]\]

where $\tau$ is the stopping (exercise) time. The goal is to retrieve both the optimal exercise strategy ($\tau$) and the payoff.

We approximate each stopping decision with a neural network, in order to maximize the expected payoff.
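
A sketch of the setup (the models/optimizer pattern follows the NNStopping tutorial at the end of these docs; the network widths and the sigmoid output, read as a stopping probability, are illustrative assumptions):

using HighDimPDE, Flux

d, N = 3, 50   # number of assets, number of exercise dates (illustrative)
# one network per exercise date, each mapping (x, t) to a stopping decision
models = [Chain(Dense(d + 1, 32, tanh), Dense(32, 1, sigmoid)) for _ in 1:N]
opt = Flux.Optimisers.Adam(0.01)
alg = NNStopping(models, opt)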

diff --git a/dev/assets/Manifest.toml b/dev/assets/Manifest.toml
index 10426d3..15022ad 100644
--- a/dev/assets/Manifest.toml
+++ b/dev/assets/Manifest.toml
@@ -104,9 +104,9 @@ version = "7.9.0"

[[deps.ArrayLayouts]]
deps = ["FillArrays", "LinearAlgebra"]
-git-tree-sha1 = "6404a564c24a994814106c374bec893195e19bac"
+git-tree-sha1 = "0330bc3e828a05d1073553fb56f9695d73077370"
uuid = "4c555306-a7a7-4459-81d9-ec55ddd5c99a"
-version = "1.8.0"
+version = "1.9.1"
weakdeps = ["SparseArrays"]

    [deps.ArrayLayouts.extensions]
@@ -123,9 +123,9 @@ version = "0.1.0"

[[deps.BFloat16s]]
deps = ["LinearAlgebra", "Printf", "Random", "Test"]
-git-tree-sha1 = "dbf84058d0a8cbbadee18d25cf606934b22d7c66"
+git-tree-sha1 = "2c7cc21e8678eff479978a0a2ef5ce2f51b63dff"
uuid = "ab4f0b2a-ad5b-11e8-123f-65d77653426b"
-version = "0.4.2"
+version = "0.5.0"

[[deps.BangBang]]
deps = ["Compat", "ConstructionBase", "InitialValues", "LinearAlgebra", "Requires", "Setfield", "Tables"]
@@ -367,10 +367,10 @@ uuid = "8bb1440f-4735-579b-a4ab-409b98df4dab"
version = "1.9.1"

[[deps.DiffEqBase]]
-deps = ["ArrayInterface", "DataStructures", "DocStringExtensions", "EnumX", "EnzymeCore", "FastBroadcast", "ForwardDiff", "FunctionWrappers", "FunctionWrappersWrappers", "LinearAlgebra", "Logging", "Markdown", "MuladdMacro", "Parameters", "PreallocationTools", "PrecompileTools", "Printf", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "Setfield", "SparseArrays", "Static", "StaticArraysCore", "Statistics", "Tricks", "TruncatedStacktraces"]
-git-tree-sha1 = "b19b2bb1ecd1271334e4b25d605e50f75e68fcae"
+deps = ["ArrayInterface", "ConcreteStructs", "DataStructures", "DocStringExtensions", "EnumX", "EnzymeCore", "FastBroadcast", "FastClosures", "ForwardDiff", "FunctionWrappers", "FunctionWrappersWrappers", "LinearAlgebra", "Logging", "Markdown", "MuladdMacro", "Parameters", "PreallocationTools", "PrecompileTools", "Printf", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "Setfield", "SparseArrays", "Static", "StaticArraysCore", "Statistics", "Tricks", "TruncatedStacktraces"]
+git-tree-sha1 = "4fa023dbb15b3485426bbc6c43e030c14250d664"
uuid = "2b5f629d-d688-5b77-993f-72d75c75574e"
-version = "6.148.0"
+version = "6.149.0"

    [deps.DiffEqBase.extensions]
    DiffEqBaseChainRulesCoreExt = "ChainRulesCore"
@@ -398,9 +398,9 @@ version = "6.148.0"

[[deps.DiffEqCallbacks]]
deps = ["DataStructures", "DiffEqBase", "ForwardDiff", "Functors", "LinearAlgebra", "Markdown", "NonlinearSolve", "Parameters", "RecipesBase", "RecursiveArrayTools", "SciMLBase", "StaticArraysCore"]
-git-tree-sha1 = "e73f4d7e780cf78eea9f13dd6eaccb0ef3c6a241"
+git-tree-sha1 = "2df0433103c89ee2dad56f4ef9c7755521464a39"
uuid = "459566f4-90b8-5000-8ac3-15dfb0a30def"
-version = "3.4.1"
+version = "3.5.0"

    [deps.DiffEqCallbacks.weakdeps]
    OrdinaryDiffEq = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed"
@@ -504,9 +504,9 @@ weakdeps = ["SpecialFunctions"]
    EnzymeSpecialFunctionsExt = "SpecialFunctions"

[[deps.EnzymeCore]]
-git-tree-sha1 = "59c44d8fbc651c0395d8a6eda64b05ce316f58b4"
+git-tree-sha1 = "1bc328eec34ffd80357f84a84bb30e4374e9bd60"
uuid = "f151be2c-9106-41f4-ab19-57ee4f262869"
-version = "0.6.5"
+version = "0.6.6"
weakdeps = ["Adapt"]

    [deps.EnzymeCore.extensions]
@@ -568,10 +568,10 @@ version = "2.0.2"
uuid = "7b1f6079-737a-58dc-b8bc-7a2ca5c1b5ee"

[[deps.FillArrays]]
-deps = ["LinearAlgebra", "Random"]
-git-tree-sha1 = "5b93957f6dcd33fc343044af3d48c215be2562f1"
+deps = ["LinearAlgebra"]
+git-tree-sha1 = "bfe82a708416cf00b73a3198db0859c82f741558"
uuid = "1a297f60-69ca-5386-bcde-b61e274b549b"
-version = "1.9.3"
+version = "1.10.0"
weakdeps = ["PDMats", "SparseArrays", "Statistics"]

    [deps.FillArrays.extensions]
@@ -648,9 +648,9 @@ version = "0.1.3"

[[deps.Functors]]
deps = ["LinearAlgebra"]
-git-tree-sha1 = "8ae30e786837ce0a24f5e2186938bf3251ab94b2"
+git-tree-sha1 = "fa8d8fcfa6c38a9a7aa07233e35b3d9a39ec751a"
uuid = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
-version = "0.4.8"
+version = "0.4.9"

[[deps.Future]]
deps = ["Random"]
@@ -688,9 +688,9 @@ version = "1.3.1"

[[deps.Git_jll]]
deps = ["Artifacts", "Expat_jll", "JLLWrappers", "LibCURL_jll", "Libdl", "Libiconv_jll", "OpenSSL_jll", "PCRE2_jll", "Zlib_jll"]
-git-tree-sha1 = "12945451c5d0e2d0dca0724c3a8d6448b46bbdf9"
+git-tree-sha1 = "d18fb8a1f3609361ebda9bf029b60fd0f120c809"
uuid = "f8c6e375-362e-5223-8a59-34ff63f689eb"
-version = "2.44.0+1"
+version = "2.44.0+2"

[[deps.Graphs]]
deps = ["ArnoldiMethod", "Compat", "DataStructures", "Distributed", "Inflate", "LinearAlgebra", "Random", "SharedArrays", "SimpleTraits", "SparseArrays", "Statistics"]
@@ -842,9 +842,9 @@ version = "0.9.5"

[[deps.LLVM]]
deps = ["CEnum", "LLVMExtra_jll", "Libdl", "Preferences", "Printf", "Requires", "Unicode"]
-git-tree-sha1 = "ab01dde107f21aa76144d0771dccc08f152ccac7"
+git-tree-sha1 = "839c82932db86740ae729779e610f07a1640be9a"
uuid = "929cbde3-209d-540e-8aea-75f648917ca0"
-version = "6.6.2"
+version = "6.6.3"
weakdeps = ["BFloat16s"]

    [deps.LLVM.extensions]
@@ -879,9 +879,9 @@ version = "1.2.2"

[[deps.LazyArrays]]
deps = ["ArrayLayouts", "FillArrays", "LinearAlgebra", "MacroTools", "MatrixFactorizations", "SparseArrays"]
-git-tree-sha1 = "9cfca23ab83b0dfac93cb1a1ef3331ab9fe596a5"
+git-tree-sha1 = "af45931c321aafdb96a6e0b26e81124e1b390e4e"
uuid = "5078a376-72f3-5289-bfd5-ec5146d43c02"
-version = "1.8.3"
+version = "1.9.0"
weakdeps = ["StaticArrays"]

    [deps.LazyArrays.extensions]
@@ -996,9 +996,9 @@ uuid = "56ddb016-857b-54e1-b83d-db4d58db5568"

[[deps.LoopVectorization]]
deps = ["ArrayInterface", "CPUSummary", "CloseOpenIntervals", "DocStringExtensions", "HostCPUFeatures", "IfElse", "LayoutPointers", "LinearAlgebra", "OffsetArrays", "PolyesterWeave", "PrecompileTools", "SIMDTypes", "SLEEFPirates", "Static", "StaticArrayInterface", "ThreadingUtilities", "UnPack", "VectorizationBase"]
-git-tree-sha1 = "0f5648fbae0d015e3abe5867bca2b362f67a5894"
+git-tree-sha1 = "a13f3be5d84b9c95465d743c82af0b094ef9c2e2"
uuid = "bdcacae8-1622-11e9-2a5c-532679323890"
-version = "0.12.166"
+version = "0.12.169"
weakdeps = ["ChainRulesCore", "ForwardDiff", "SpecialFunctions"]

    [deps.LoopVectorization.extensions]
@@ -1051,9 +1051,9 @@ version = "2.1.0"

[[deps.MaybeInplace]]
deps = ["ArrayInterface", "LinearAlgebra", "MacroTools", "SparseArrays"]
-git-tree-sha1 = "a85c6a98c9e5a2a7046bc1bb89f28a3241e1de4d"
+git-tree-sha1 = "b1f2f92feb0bc201e91c155ef575bcc7d9cc3526"
uuid = "bb5d69b7-63fc-4a16-80bd-7e42200c7bdb"
-version = "0.1.1"
+version = "0.1.2"

[[deps.MbedTLS_jll]]
deps = ["Artifacts", "Libdl"]
@@ -1144,9 +1144,9 @@ version = "1.2.0"

[[deps.NonlinearSolve]]
deps = ["ADTypes", "ArrayInterface", "ConcreteStructs", "DiffEqBase", "FastBroadcast", "FastClosures", "FiniteDiff", "ForwardDiff", "LazyArrays", "LineSearches", "LinearAlgebra", "LinearSolve", "MaybeInplace", "PrecompileTools", "Preferences", "Printf", "RecursiveArrayTools", "Reexport", "SciMLBase", "SimpleNonlinearSolve", "SparseArrays", "SparseDiffTools", "StaticArraysCore", "TimerOutputs"]
-git-tree-sha1 = "1638addfc31707aea26333ff822afcf9d2e6f7de"
+git-tree-sha1 = "b9e12aa04c90a05d2aaded6f7c4d8b39e77751db"
uuid = "8913a72c-1f9b-4ce2-8d82-65094dcecaec"
-version = "3.8.3"
+version = "3.9.1"

    [deps.NonlinearSolve.extensions]
    NonlinearSolveBandedMatricesExt = "BandedMatrices"
@@ -1207,9 +1207,9 @@ version = "0.8.1+2"

[[deps.OpenSSL_jll]]
deps = ["Artifacts", "JLLWrappers", "Libdl"]
-git-tree-sha1 = "60e3045590bd104a16fefb12836c00c0ef8c7f8c"
+git-tree-sha1 = "3da7367955dcc5c54c1ba4d402ccdc09a1a3e046"
uuid = "458c3c95-2e84-50aa-8efc-19380b2a3a95"
-version = "3.0.13+0"
+version = "3.0.13+1"

[[deps.OpenSpecFun_jll]]
deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Pkg"]
@@ -1218,10 +1218,10 @@ uuid = "efe28fd5-8261-553b-a9e1-b2916fc3738e"
version = "0.5.5+0"

[[deps.Optim]]
-deps = ["Compat", "FillArrays", "ForwardDiff", "LineSearches", "LinearAlgebra", "NLSolversBase", "NaNMath", "PackageExtensionCompat", "Parameters", "PositiveFactorizations", "Printf", "SparseArrays", "StatsBase"]
-git-tree-sha1 = "d1223e69af90b6d26cea5b6f3b289b3148ba702c"
+deps = ["Compat", "FillArrays", "ForwardDiff", "LineSearches", "LinearAlgebra", "NLSolversBase", "NaNMath", "Parameters", "PositiveFactorizations", "Printf", "SparseArrays", "StatsBase"]
+git-tree-sha1 = "d9b79c4eed437421ac4285148fcadf42e0700e89"
uuid = "429524aa-4258-5aef-a3af-852621145aeb"
-version = "1.9.3"
+version = "1.9.4"

    [deps.Optim.extensions]
    OptimMOIExt = "MathOptInterface"
@@ -1288,9 +1288,9 @@ version = "0.4.4"

[[deps.Polyester]]
deps = ["ArrayInterface", "BitTwiddlingConvenienceFunctions", "CPUSummary", "IfElse", "ManualMemory", "PolyesterWeave", "Requires", "Static", "StaticArrayInterface", "StrideArraysCore", "ThreadingUtilities"]
-git-tree-sha1 = "8df43bbe60029526dd628af7e9951f5af680d4d7"
+git-tree-sha1 = "09f59c6dda37c7f73efddc5bdf6f92bc940eb484"
uuid = "f517fe37-dbe3-4b94-8317-1923a5111588"
-version = "0.7.10"
+version = "0.7.12"

[[deps.PolyesterWeave]]
deps = ["BitTwiddlingConvenienceFunctions", "CPUSummary", "IfElse", "Static", "ThreadingUtilities"]
@@ -1557,9 +1557,9 @@ version = "0.1.0"

[[deps.SimpleNonlinearSolve]]
deps = ["ADTypes", "ArrayInterface", "ConcreteStructs", "DiffEqBase", "DiffResults", "FastClosures", "FiniteDiff", "ForwardDiff", "LinearAlgebra", "MaybeInplace", "PrecompileTools", "Reexport", "SciMLBase", "StaticArraysCore"]
-git-tree-sha1 = "a535ae5083708f59e75d5bb3042c36d1be9bc778"
+git-tree-sha1 = "d4c17fc60bf5f8f2be02777c4836878f27ac7b9b"
uuid = "727e6d20-b764-4bd8-a329-72de5adea6c7"
-version = "1.6.0"
+version = "1.7.0"

    [deps.SimpleNonlinearSolve.extensions]
    SimpleNonlinearSolveChainRulesCoreExt = "ChainRulesCore"
@@ -1696,9 +1696,9 @@ version = "1.7.0"

[[deps.StatsBase]]
deps = ["DataAPI", "DataStructures", "LinearAlgebra", "LogExpFunctions", "Missings", "Printf", "Random", "SortingAlgorithms", "SparseArrays", "Statistics", "StatsAPI"]
-git-tree-sha1 = "1d77abd07f617c4868c33d4f5b9e1dbb2643c9cf"
+git-tree-sha1 = "5cf7606d6cef84b543b483848d4ae08ad9832b21"
uuid = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
-version = "0.34.2"
+version = "0.34.3"

[[deps.StatsFuns]]
deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"]
diff --git a/dev/getting_started/index.html b/dev/getting_started/index.html
index d207666..50408f7 100644
--- a/dev/getting_started/index.html
+++ b/dev/getting_started/index.html
@@ -17,7 +17,7 @@ ## Solving with multiple threads
 sol = solve(prob, alg, multithreading = true)
PIDESolution
 timespan: [0.0, 0.5]
-u(x,t): [1.0, 0.9682420316274496]

+u(x,t): [1.0, 0.9725230095472358]

Non-local PDE with Neumann boundary conditions

Let's include in the previous equation non-local competition, i.e.

\[\partial_t u = u (1 - \int_\Omega u(t,y)dy) + \frac{1}{2}\sigma^2\Delta_xu \tag{2}\]

where $\Omega = [-1/2, 1/2]^d$, and let's assume Neumann boundary conditions on $\Omega$.

using HighDimPDE
 
 ## Definition of the problem
 d = 10 # dimension of the problem
@@ -35,7 +35,7 @@
 
 sol = solve(prob, alg, multithreading = true)
PIDESolution
 timespan: [0.0, 0.5]
-u(x,t): [1.0, 1.2244347621507332]

+u(x,t): [1.0, 1.2221972634837668]

DeepSplitting

Let's solve the previous equation with DeepSplitting.

using HighDimPDE
 using Flux # needed to define the neural network
 
 ## Definition of the problem
@@ -72,11 +72,11 @@
             maxiters = 1000,
             batch_size = 1000)
PIDESolution
 timespan: 0.0:0.09999988228082657:0.49999941140413284
-u(x,t): Float32[1.0, 0.9062339, 0.9416901, 0.9911105, 1.0241605, 1.0796193]

+u(x,t): Float32[1.0, 0.8836641, 0.9410823, 0.9895599, 1.0240611, 1.0667423]

Solving on the GPU

DeepSplitting can run on the GPU for (much) improved performance. To do so, just set use_cuda = true.

sol = solve(prob, 
             alg, 
             0.1, 
             verbose = true, 
             abstol = 2e-3,
             maxiters = 1000,
             batch_size = 1000,
            use_cuda=true)

diff --git a/dev/index.html b/dev/index.html
index 1fc0bfc..2ddd353 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -28,9 +28,9 @@ [dce04be8] ArgCheck v2.3.0
⌅ [ec485272] ArnoldiMethod v0.2.0
  [4fba245c] ArrayInterface v7.9.0
- [4c555306] ArrayLayouts v1.8.0
+ [4c555306] ArrayLayouts v1.9.1
  [a9b6321e] Atomix v0.1.0
-⌃ [ab4f0b2a] BFloat16s v0.4.2
+ [ab4f0b2a] BFloat16s v0.5.0
⌅ [198e06fe] BangBang v0.3.40
  [9718e550] Baselet v0.1.1
  [62783981] BitTwiddlingConvenienceFunctions v0.1.5
@@ -61,8 +61,8 @@ [e2d170a0] DataValueInterfaces v1.0.0
  [244e2a9f] DefineSingletons v0.1.2
  [8bb1440f] DelimitedFiles v1.9.1
- [2b5f629d] DiffEqBase v6.148.0
- [459566f4] DiffEqCallbacks v3.4.1
+ [2b5f629d] DiffEqBase v6.149.0
+ [459566f4] DiffEqCallbacks v3.5.0
  [77a26b50] DiffEqNoiseProcess v5.21.0
  [163ba53b] DiffResults v1.1.0
  [b552c78f] DiffRules v1.15.1
@@ -74,7 +74,7 @@ [da5c29d0] EllipsisNotation v1.8.0
  [4e289a0a] EnumX v1.0.4
  [7da242da] Enzyme v0.11.20
-⌅ [f151be2c] EnzymeCore v0.6.5
+⌅ [f151be2c] EnzymeCore v0.6.6
  [d4d017d3] ExponentialUtilities v1.26.1
  [e2ba6199] ExprTools v0.1.10
  [cc61a311] FLoops v0.2.1
@@ -82,7 +82,7 @@ [7034ab61] FastBroadcast v0.2.8
  [9aa1b823] FastClosures v0.3.2
  [29a986be] FastLapackInterface v2.0.2
- [1a297f60] FillArrays v1.9.3
+ [1a297f60] FillArrays v1.10.0
  [6a86dc24] FiniteDiff v2.23.0
  [53c48c17] FixedPointNumbers v0.8.4
  [587475ba] Flux v0.14.15
@@ -90,7 +90,7 @@ [f62d2435] FunctionProperties v0.1.2
  [069b7b12] FunctionWrappers v1.1.3
  [77dc65aa] FunctionWrappersWrappers v0.1.3
- [d9f16b24] Functors v0.4.8
+ [d9f16b24] Functors v0.4.9
  [0c68f7d7] GPUArrays v10.0.2
  [46192b85] GPUArraysCore v0.1.6
⌅ [61eb1bfa] GPUCompiler v0.25.0
@@ -117,24 +117,24 @@ [ef3ab10e] KLU v0.6.0
  [63c18a36] KernelAbstractions v0.9.18
  [ba0b0d4f] Krylov v0.9.5
- [929cbde3] LLVM v6.6.2
+ [929cbde3] LLVM v6.6.3
  [8b046642] LLVMLoopInfo v1.0.0
  [b964fa9f] LaTeXStrings v1.3.1
  [10f19ff3] LayoutPointers v0.1.15
  [0e77f7df] LazilyInitializedFields v1.2.2
- [5078a376] LazyArrays v1.8.3
+ [5078a376] LazyArrays v1.9.0
  [2d8b4e74] LevyArea v1.0.0
  [d3d80556] LineSearches v7.2.0
  [7ed4a6bd] LinearSolve v2.28.0
  [2ab3a3ac] LogExpFunctions v0.3.27
- [bdcacae8] LoopVectorization v0.12.166
+ [bdcacae8] LoopVectorization v0.12.169
  [d8e11817] MLStyle v0.4.17
  [f1d291b0] MLUtils v0.4.4
  [1914dd2f] MacroTools v0.5.13
  [d125e4d3] ManualMemory v0.1.8
  [d0879d2d] MarkdownAST v0.1.2
  [a3b82374] MatrixFactorizations v2.1.0
- [bb5d69b7] MaybeInplace v0.1.1
+ [bb5d69b7] MaybeInplace v0.1.2
⌅ [128add7d] MicroCollections v0.1.4
  [e1d29d7a] Missings v1.1.0
  [46d2c3a1] MuladdMacro v0.2.4
@@ -144,11 +144,11 @@ [5da4648a] NVTX v0.3.4
  [77ba4419] NaNMath v1.0.2
  [71a1bf82] NameResolution v0.1.5
- [8913a72c] NonlinearSolve v3.8.3
+ [8913a72c] NonlinearSolve v3.9.1
  [d8793406] ObjectFile v0.4.1
  [6fe1bfb0] OffsetArrays v1.13.0
  [0b1bfda6] OneHotArrays v0.2.5
- [429524aa] Optim v1.9.3
+ [429524aa] Optim v1.9.4
  [3bd65402] Optimisers v0.3.2
  [bac558e1] OrderedCollections v1.6.3
  [1dea7af3] OrdinaryDiffEq v6.74.1
@@ -157,7 +157,7 @@ [d96e819e] Parameters v0.12.3
  [69de0a69] Parsers v2.8.1
  [e409e4f3] PoissonRandom v0.4.4
- [f517fe37] Polyester v0.7.10
+ [f517fe37] Polyester v0.7.12
  [1d0040c9] PolyesterWeave v0.2.1
  [2dfb63ee] PooledArrays v1.4.3
  [85a6dd25] PositiveFactorizations v0.2.4
@@ -191,7 +191,7 @@ [91c51154] SentinelArrays v1.4.1
  [efcf1570] Setfield v1.1.1
  [605ecd9f] ShowCases v0.1.0
- [727e6d20] SimpleNonlinearSolve v1.6.0
+ [727e6d20] SimpleNonlinearSolve v1.7.0
  [699a6c99] SimpleTraits v0.9.4
  [ce78b400] SimpleUnPack v1.1.0
  [a2af1166] SortingAlgorithms v1.2.1
@@ -205,7 +205,7 @@ [90137ffa] StaticArrays v1.9.3
  [1e83bf80] StaticArraysCore v1.4.2
  [82ae8749] StatsAPI v1.7.0
- [2913bbd2] StatsBase v0.34.2
+ [2913bbd2] StatsBase v0.34.3
  [4c63d2b9] StatsFuns v1.3.1
  [789caeaf] StochasticDiffEq v6.65.1
  [7792a7ef] StrideArraysCore v0.5.2
@@ -236,14 +236,14 @@ ⌅ [62b44479] CUDNN_jll v8.9.4+0
⌅ [7cc45869] Enzyme_jll v0.0.102+0
  [2e619515] Expat_jll v2.5.0+0
- [f8c6e375] Git_jll v2.44.0+1
+ [f8c6e375] Git_jll v2.44.0+2
  [1d5cc7b8] IntelOpenMP_jll v2024.0.2+0
  [9c1d0b0a] JuliaNVTXCallbacks_jll v0.2.1+0
  [dad2f222] LLVMExtra_jll v0.0.29+0
  [94ce4f54] Libiconv_jll v1.17.0+0
  [856f044c] MKL_jll v2024.0.0+0
  [e98f9f5b] NVTX_jll v3.1.0+2
- [458c3c95] OpenSSL_jll v3.0.13+0
+ [458c3c95] OpenSSL_jll v3.0.13+1
  [efe28fd5] OpenSpecFun_jll v0.5.5+0
  [f50d1b31] Rmath_jll v0.4.0+0
  [0dad84c5] ArgTools v1.1.1
@@ -294,4 +294,4 @@ [8e850b90] libblastrampoline_jll v5.8.0+1
  [8e850ede] nghttp2_jll v1.52.0+1
  [3f19e933] p7zip_jll v17.4.0+2
Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.

diff --git a/dev/problems/index.html b/dev/problems/index.html
index dd0c8d4..20fbfa8 100644
--- a/dev/problems/index.html
+++ b/dev/problems/index.html
@@ -31,4 +31,4 @@

Defines a Parabolic Partial Differential Equation of the form:

\[\begin{aligned}
\frac{du}{dt} &= \tfrac{1}{2} \text{Tr}(\sigma \sigma^T) \Delta u(x, t) + \mu \nabla u(x, t) \\
&\quad + f(x, u(x, t), ( \nabla_x u )(x, t), p, t)
\end{aligned}\]

Arguments

Optional Arguments

source
Note

When defining a PDE with PIDEProblem, note that the function being integrated, f, has the signature f(x, y, v_x, v_y, ∇v_x, ∇v_y), where y is the integration variable and x is held constant throughout the integration. If the PDE has no integral term and the nonlinear term f is simply evaluated as f(x, v_x, ∇v_x), we suggest using ParabolicPDEProblem instead.
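
For instance, a non-local logistic term like the one in the getting-started example could be written as follows (a sketch; the trailing p, t arguments follow the SciML convention and are an assumption here):

# v_x = u(t, x), v_y = u(t, y); y is the Monte Carlo integration variable
f(x, y, v_x, v_y, ∇v_x, ∇v_y, p, t) = v_x .* (1f0 .- v_y)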

diff --git a/dev/tutorials/deepbsde/index.html b/dev/tutorials/deepbsde/index.html
index 1549b99..68a8847 100644
--- a/dev/tutorials/deepbsde/index.html
+++ b/dev/tutorials/deepbsde/index.html
@@ -67,4 +67,4 @@ Dense(hls,hls,relu),
    Dense(hls,d))
pdealg = NNPDENS(u0, σᵀ∇u, opt=opt)

And now we solve the PDE. Here we solve the underlying neural SDE with the Euler-Maruyama solver and our chosen dt=0.2, run at most 150 iterations of the optimizer with 100 SDE solves per loss evaluation (for averaging), and stop early if the loss ever drops below 1f-6.

ans = solve(prob, pdealg, verbose=true, maxiters=150, trajectories=100,
                            alg=EM(), dt=0.2, pabstol = 1f-6)

References

  1. Shinde, A. S., and K. C. Takale. "Study of Black-Scholes model and its applications." Procedia Engineering 38 (2012): 270-279.
diff --git a/dev/tutorials/deepsplitting/index.html b/dev/tutorials/deepsplitting/index.html
index 21076ba..230a5f8 100644
--- a/dev/tutorials/deepsplitting/index.html
+++ b/dev/tutorials/deepsplitting/index.html
@@ -41,4 +41,4 @@ abstol = 2e-3,
    maxiters = 1000,
    batch_size = 1000,
    use_cuda=true)

diff --git a/dev/tutorials/mlp/index.html b/dev/tutorials/mlp/index.html
index 76b6378..eed5d8b 100644
--- a/dev/tutorials/mlp/index.html
+++ b/dev/tutorials/mlp/index.html
@@ -31,4 +31,4 @@ ## Definition of the algorithm
alg = MLP(mc_sample = mc_sample)
sol = solve(prob, alg, multithreading=true)

diff --git a/dev/tutorials/nnkolmogorov/index.html b/dev/tutorials/nnkolmogorov/index.html
index 2c4f6b6..3c64893 100644
--- a/dev/tutorials/nnkolmogorov/index.html
+++ b/dev/tutorials/nnkolmogorov/index.html
@@ -25,4 +25,4 @@ alg = NNKolmogorov(m, opt)
m = Chain(Dense(d, 16, elu), Dense(16, 32, elu), Dense(32, 16, elu), Dense(16, 1))
sol = solve(prob, alg, sdealg, verbose = true, dt = 0.01,
    dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300)

diff --git a/dev/tutorials/nnparamkolmogorov/index.html b/dev/tutorials/nnparamkolmogorov/index.html
index ef3e92f..9a091f0 100644
--- a/dev/tutorials/nnparamkolmogorov/index.html
+++ b/dev/tutorials/nnparamkolmogorov/index.html
@@ -43,4 +43,4 @@ p_sigma_test = rand(p_domain.p_sigma[1]:dps.p_sigma:p_domain.p_sigma[2], 1, 1)
t_test = rand(tspan[1]:dt:tspan[2], 1, 1)
p_mu_test = nothing
p_phi_test = nothing
sol.ufuns(x_test, t_test, p_sigma_test, p_mu_test, p_phi_test)
diff --git a/dev/tutorials/nnstopping/index.html b/dev/tutorials/nnstopping/index.html
index dd42313..8c4febe 100644
--- a/dev/tutorials/nnstopping/index.html
+++ b/dev/tutorials/nnstopping/index.html
@@ -21,4 +21,4 @@ for i in 1:N]
Note

The number of models should equal the number of steps in the time discretization.

And finally we define our optimizer and algorithm, and call solve:

opt = Flux.Optimisers.Adam(0.01)
 alg = NNStopping(models, opt)
 
sol = solve(prob, alg, SRIW1(); dt = dt, trajectories = 1000, maxiters = 1000, verbose = true)