diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 0cf10b3..79b0ae1 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.3","generation_timestamp":"2024-05-31T01:24:47","documenter_version":"1.4.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-06-07T01:25:16","documenter_version":"1.4.1"}} \ No newline at end of file diff --git a/dev/DeepBSDE/index.html b/dev/DeepBSDE/index.html index ea796c5..5dd7a98 100644 --- a/dev/DeepBSDE/index.html +++ b/dev/DeepBSDE/index.html @@ -64,4 +64,4 @@ trajectories_lower, maxiters_limits ) -

Returns a PIDESolution object.

Arguments:

To use SDE Algorithms, use DeepBSDE.

source

The general idea 💡

The DeepBSDE algorithm is similar in essence to the DeepSplitting algorithm, with the difference that it uses two neural networks to approximate both the solution and its gradient.
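As an illustrative sketch (assuming Flux.jl, and assuming the DeepBSDE constructor takes the two networks and an opt keyword, as the NNPDENS call in the DeepBSDE tutorial does; the dimensions and optimizer are placeholders):

using Flux

d = 30        # problem dimension (placeholder)
hls = 10 + d  # hidden layer size (placeholder)

# network approximating the solution u(t0, x) ∈ ℝ
u0 = Flux.Chain(Dense(d, hls, relu), Dense(hls, hls, relu), Dense(hls, 1))
# network approximating the (scaled) gradient σᵀ∇u(t, x) ∈ ℝᵈ
σᵀ∇u = Flux.Chain(Dense(d + 1, hls, relu), Dense(hls, hls, relu), Dense(hls, d))

# assumed constructor, mirroring the tutorial's NNPDENS(u0, σᵀ∇u, opt = opt)
alg = DeepBSDE(u0, σᵀ∇u, opt = Flux.Optimisers.Adam(0.01))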

References

diff --git a/dev/DeepSplitting/index.html b/dev/DeepSplitting/index.html index 8b016c3..c47766d 100644 --- a/dev/DeepSplitting/index.html +++ b/dev/DeepSplitting/index.html @@ -20,4 +20,4 @@ cuda_device, verbose_rate ) -> PIDESolution{_A, _B, _C, Vector{_A1}, Vector{Any}, Nothing} where {_A, _B, _C, _A1} -

Returns a PIDESolution object.

Arguments

source

The DeepSplitting algorithm reformulates the PDE as a stochastic learning problem.

The algorithm relies on two main ideas:

The general idea 💡

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x)) \tag{1}\]

with initial conditions $u(0, x) = g(x)$, where $u \colon \R^d \to \R$.

Local Feynman-Kac formula

DeepSplitting solves the PDE iteratively over small time intervals by using an approximate Feynman-Kac representation locally.

More specifically, considering a small time step $dt = t_{n+1} - t_n$, one has that

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \mathbb{E} \left[ f(t, X_{T - t_{n}}, u(t_{n},X_{T - t_{n}}))(t_{n+1} - t_n) + u(t_{n}, X_{T - t_{n}}) | X_{T - t_{n+1}}\right] \tag{3}.\]

One can therefore use Monte Carlo integrations to approximate the expectations

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \frac{1}{\text{batch\_size}}\sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + (t_{n+1} - t_n)\sum_{k=1}^{K} \big[ f(t_n, X_{T - t_{n}}^{(j)}, u(t_{n},X_{T - t_{n}}^{(j)})) \big] \right]\]

Reformulation as a learning problem

The DeepSplitting algorithm approximates $u(t_{n+1}, x)$ by a parametric function ${\bf u}^\theta_n(x)$. It is advised to let this function be a neural network ${\bf u}_\theta \equiv NN_\theta$ as they are universal approximators.
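For instance, a minimal sketch of such a network with Flux.jl (the width, depth, and activation are placeholders):

using Flux

d = 10        # problem dimension (placeholder)
hls = d + 50  # hidden layer size (placeholder)

# parametric approximation uᶿ : ℝᵈ → ℝ, retrained at every time step tₙ
nn = Flux.Chain(Dense(d, hls, tanh), Dense(hls, hls, tanh), Dense(hls, 1))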

For each time step $t_n$, the DeepSplitting algorithm

  1. Generates the particle trajectories $X^{x, (j)}$ satisfying Eq. (2) over the whole interval $[0,T]$.

  2. Seeks ${\bf u}_{n+1}^{\theta}$ by minimizing the loss function

\[L(\theta) = ||{\bf u}^\theta_{n+1}(X_{T - t_n}) - \left[ f(t, X_{T - t_{n-1}}, {\bf u}_{n-1}(X_{T - t_{n-1}}))(t_{n} - t_{n-1}) + {\bf u}_{n-1}(X_{T - t_{n-1}}) \right] ||\]

This way, the PDE approximation problem is decomposed into a sequence of separate learning problems. In HighDimPDE.jl the right parameter combination $\theta$ is found by iteratively minimizing $L$ using stochastic gradient descent.

Tip

To solve with DeepSplitting, one needs to pass additional arguments to solve, most importantly the time step dt and keyword arguments such as abstol, maxiters, and batch_size, as in the sketch below.
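A hedged sketch of such a call, reusing the keyword values from the getting-started example later in these docs (the time step 0.1 and all keyword values are placeholders):

sol = solve(prob, alg, 0.1,      # 0.1 is the time step dt
            verbose = true,
            abstol = 2e-3,
            maxiters = 1000,
            batch_size = 1000)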

Solving point-wise or on a hypercube

Pointwise

DeepSplitting allows obtaining $u(t,x)$ at a single point $x \in \Omega$ with the keyword x.

prob = PIDEProblem(μ, σ, x, tspan, g, f)

Hypercube

Yet more generally, one wants to solve Eq. (1) on a $d$-dimensional cube $[a,b]^d$. This is offered by HighDimPDE.jl with the keyword x0_sample.

prob = PIDEProblem(μ, σ, x, tspan, g, f; x0_sample = x0_sample)
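For instance, x0_sample could be built with the UniformSampling type mentioned below (a sketch; the bounds and dimension are placeholders):

d = 10                                  # problem dimension (placeholder)
a, b = fill(-5f-1, d), fill(5f-1, d)    # the cube [-1/2, 1/2]^d (placeholder bounds)
x0_sample = UniformSampling(a, b)       # initial points ξ drawn uniformly on [a,b]^d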

Internally, this is handled by assigning a random variable as the initial point of the particles, i.e.

\[X_t^\xi = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + \xi,\]

where $\xi$ is a random variable uniformly distributed over $[a,b]^d$. This way, the neural network is trained on the whole hypercube $[a,b]^d$ instead of at a single point.

Non-local PDEs

DeepSplitting can solve non-local reaction-diffusion equations of the type

\[\partial_t u = \mu(x) \nabla_x u + \frac{1}{2} \sigma^2(x) \Delta u + \int_{\Omega}f(x,y, u(t,x), u(t,y))dy\]

The non-local term is handled by Monte Carlo integration.

\[u(t_{n+1}, X_{T - t_{n+1}}) \approx \sum_{j=1}^{\text{batch\_size}} \left[ u(t_{n}, X_{T - t_{n}}^{(j)}) + \frac{(t_{n+1} - t_n)}{K}\sum_{k=1}^{K} \big[ f(t, X_{T - t_{n}}^{(j)}, Y_{X_{T - t_{n}}^{(j)}}^{(k)}, u(t_{n},X_{T - t_{n}}^{(j)}), u(t_{n},Y_{X_{T - t_{n}}^{(j)}}^{(k)})) \big] \right]\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number $K$ of Monte Carlo samples used for the integration, through the keywords mc_sample and K.

alg = DeepSplitting(nn, opt = opt, mc_sample = mc_sample, K = 1)

mc_sample can be either UniformSampling(a, b) or NormalSampling(σ_sampling, shifted).
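For example (a sketch; the bounds and standard deviation are placeholders):

d = 10
mc_sample = UniformSampling(fill(-5f-1, d), fill(5f-1, d))  # y sampled uniformly on [-1/2, 1/2]^d
# or, to sample y from a normal distribution (optionally shifted by x):
# mc_sample = NormalSampling(1f-1, true)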

References

diff --git a/dev/Feynman_Kac/index.html b/dev/Feynman_Kac/index.html index 17bb963..cff14cd 100644 --- a/dev/Feynman_Kac/index.html +++ b/dev/Feynman_Kac/index.html @@ -7,4 +7,4 @@ v(\tau, x) &= \int_{-\tau}^0 \mathbb{E} \left[ f(X^x_{s + \tau}, v(s + T, X^x_{s + \tau}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]\\ &= - \int_{\tau}^0 \mathbb{E} \left[ f(X^x_{\tau - s}, v(T-s, X^x_{\tau - s}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]\\ &= \int_{0}^\tau \mathbb{E} \left[ f(X^x_{\tau - s}, v(T-s, X^x_{\tau - s}))ds \right] + \mathbb{E} \left[ v(0, X^x_{\tau}) \right]. -\end{aligned}\]

This leads to the

Non-linear Feynman-Kac for initial value problems

Consider the PDE

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x) + f(x, u(t,x))\]

with initial conditions $u(0, x) = g(x)$, where $u \colon \R^d \to \R$. Then

\[u(t, x) = \int_0^t \mathbb{E} \left[ f(X^x_{t - s}, u(T-s, X^x_{t - s}))ds \right] + \mathbb{E} \left[ u(0, X^x_t) \right] \tag{3}\]

with

\[X_t^x = \int_0^t \mu(X_s^x)ds + \int_0^t\sigma(X_s^x)dB_s + x.\]
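As an illustration of this representation (not HighDimPDE.jl code), here is a plain Euler–Maruyama / Monte Carlo sketch of the last term $\mathbb{E}[u(0, X_t^x)]$ for the purely diffusive case $f \equiv 0$; every name and parameter value is a placeholder:

# Estimate E[g(X_t^x)] by simulating Euler–Maruyama paths of dX = μ(X)dt + σ(X)dB
# and averaging g over the terminal values.
function mc_terminal_expectation(μ, σ, g, x, t; dt = 1e-2, trajectories = 10_000)
    nsteps = ceil(Int, t / dt)
    acc = 0.0
    for _ in 1:trajectories
        X = copy(x)
        for _ in 1:nsteps
            X .+= μ(X) .* dt .+ σ(X) .* sqrt(dt) .* randn(length(X))
        end
        acc += g(X)
    end
    return acc / trajectories
end

# Heat equation ∂ₜu = ½Δu in d = 10, with initial condition g(x) = exp(-|x|²):
d = 10
u_est = mc_terminal_expectation(X -> zeros(d), X -> ones(d),
                                X -> exp(-sum(abs2, X)), zeros(d), 0.5)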

diff --git a/dev/MLP/index.html b/dev/MLP/index.html index 7a642d3..c21bcf4 100644 --- a/dev/MLP/index.html +++ b/dev/MLP/index.html @@ -16,4 +16,4 @@ u_L &= \sum_{l=1}^{L-1} \frac{1}{M^{L-l}}\sum_{i=1}^{M^{L-l}} \frac{1}{K}\sum_{j=1}^{K} \bigg[ f(X^{x,(l, i)}_{t - s_{(l, i)}}, Z^{(l,j)}, u(T-s_{(l, i)}, X^{x,(l, i)}_{t - s_{(l, i)}}), u(T-s_{l,i}, Z^{(l,j)})) + \\ &\qquad \mathbf{1}_\N(l) f(X^{x,(l, i)}_{t - s_{(l, i)}}, u(T-s_{(l, i)}, X^{x,(l, i)}_{t - s_{(l, i)}}))\bigg] + \frac{1}{M^{L}}\sum_i^{M^{L}} u(0, X^{x,(l, i)}_t)\\ -\end{aligned}\]

Tip

In practice, if you have a non-local model, you need to provide the sampling method and the number $K$ of Monte Carlo samples used for the integration, through the keywords mc_sample and K, as in the sketch below.
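For instance, assuming the MLP constructor also exposes the number M of Monte Carlo samples per level and the number L of levels as keywords (a sketch with placeholder values):

d = 10                                                       # problem dimension (placeholder)
mc_sample = UniformSampling(fill(-5f-1, d), fill(5f-1, d))   # sampling for the non-local term
alg = MLP(M = 4, L = 4, K = 10, mc_sample = mc_sample)
sol = solve(prob, alg, multithreading = true)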

References

diff --git a/dev/NNKolmogorov/index.html b/dev/NNKolmogorov/index.html index 40bbfae..c435b29 100644 --- a/dev/NNKolmogorov/index.html +++ b/dev/NNKolmogorov/index.html @@ -14,4 +14,4 @@ dx, kwargs... ) -

Returns a PIDESolution object.

Arguments

source

NNKolmogorov obtains a solution for Kolmogorov equations, either of the form

\[\partial_t u(t,x) = \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with initial condition given by $g(x)$, or of the form

\[\partial_t u(t,x) = - \mu(t, x) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with terminal condition given by $g(x)$.

We can use the Feynman-Kac formula:

\[S_t^x = x + \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x)]\]
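A solve sketch following the NNKolmogorov tutorial later in these docs (prob and d are assumed to be defined; the network shape, optimizer, SDE solver, and tolerances are placeholders):

using Flux

m = Chain(Dense(d, 16, elu), Dense(16, 32, elu), Dense(32, 16, elu), Dense(16, 1))
opt = Flux.Optimisers.Adam(0.01)
alg = NNKolmogorov(m, opt)

sdealg = EM()    # Euler–Maruyama, assuming StochasticDiffEq is loaded
sol = solve(prob, alg, sdealg, verbose = true, dt = 0.01,
            dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300)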

diff --git a/dev/NNParamKolmogorov/index.html b/dev/NNParamKolmogorov/index.html index 7626b87..a40f2a8 100644 --- a/dev/NNParamKolmogorov/index.html +++ b/dev/NNParamKolmogorov/index.html @@ -20,4 +20,4 @@ dx, kwargs... ) -

Returns a PIDESolution object.

Arguments

source

NNParamKolmogorov obtains a solution for parametric families of Kolmogorov equations, either of the form

\[\partial_t u(t,x) = \mu(t, x, \gamma_\mu) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x, \gamma_\sigma) \Delta_x u(t,x)\]

with initial condition given by $g(x, \gamma_\phi)$, or of the form

\[\partial_t u(t,x) = - \mu(t, x) \nabla_x u(t,x) - \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x)\]

with terminal condition given by $g(x, \gamma_\phi)$.

We can use the Feynman-Kac formula:

\[S_t^x = x + \int_{0}^{t}\mu(S_s^x)ds + \int_{0}^{t}\sigma(S_s^x)dB_s\]

And the solution is given by:

\[f(T, x) = \mathbb{E}[g(S_T^x, \gamma_\phi)]\]
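Once trained, the learned solution can be queried at new parameter values, as in the NNParamKolmogorov tutorial later in these docs (here p_domain, dps, dt, tspan, and x_test are assumed to be defined as in that tutorial):

# query the surrogate at a freshly sampled σ-parameter and time
p_sigma_test = rand(p_domain.p_sigma[1]:dps.p_sigma:p_domain.p_sigma[2], 1, 1)
t_test = rand(tspan[1]:dt:tspan[2], 1, 1)
p_mu_test, p_phi_test = nothing, nothing   # no parametric drift or initial condition here
sol.ufuns(x_test, t_test, p_sigma_test, p_mu_test, p_phi_test)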

diff --git a/dev/NNStopping/index.html b/dev/NNStopping/index.html index ef3a6bb..fb74c6d 100644 --- a/dev/NNStopping/index.html +++ b/dev/NNStopping/index.html @@ -10,4 +10,4 @@ ensemblealg, kwargs... ) -> NamedTuple{(:payoff, :stopping_time), <:Tuple{Any, Any}} -

Returns a NamedTuple with payoff and stopping_time

Arguments:

source

The general idea 💡

Similar to DeepSplitting and DeepBSDE, NNStopping evaluates the PDE as a Stochastic Differential Equation. Consider an Obstacle PDE of the form:

\[\max\lbrace \partial_t u(t,x) + \mu(t, x) \nabla_x u(t,x) + \frac{1}{2} \sigma^2(t, x) \Delta_x u(t,x),\; g(t,x) - u(t,x)\rbrace\]

Such PDEs are commonly used as representations for the dynamics of stock prices that can be exercised before maturity, such as American Options.

Using the Feynman-Kac formula, the underlying SDE will be:

\[dX_{t}=\mu(X,t)dt + \sigma(X,t)\ dW_{t}^{Q}\]

The payoff of the option would then be:

\[\sup\lbrace\mathbb{E}[g(X_\tau, \tau)]\rbrace\]

where $\tau$ is the stopping (exercise) time. The goal is to retrieve both the optimal exercising strategy ($\tau$) and the payoff.

We approximate each stopping decision with a neural network, in order to maximise the expected payoff, as sketched below.
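A sketch of this setup, following the NNStopping tutorial later in these docs (the network architecture is a placeholder; N is the number of time steps, d the state dimension, and prob and dt are assumed to be defined; SRIW1() assumes a stochastic SDE solver package such as StochasticDiffEq is loaded):

using Flux

# one network per stopping decision, i.e. one per time step (placeholder architecture)
models = [Chain(Dense(d + 1, 32, tanh), Dense(32, 1, sigmoid)) for i in 1:N]

opt = Flux.Optimisers.Adam(0.01)
alg = NNStopping(models, opt)

sol = solve(prob, alg, SRIW1(); dt = dt, trajectories = 1000, maxiters = 1000, verbose = true)
# sol.payoff        -> estimated optimal payoff
# sol.stopping_time -> estimated optimal stopping (exercise) time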

diff --git a/dev/assets/Manifest.toml b/dev/assets/Manifest.toml index b4537c0..8afd00f 100644 --- a/dev/assets/Manifest.toml +++ b/dev/assets/Manifest.toml @@ -1,6 +1,6 @@ # This file is machine-generated - editing it directly is not advised -julia_version = "1.10.3" +julia_version = "1.10.4" manifest_format = "2.0" project_hash = "655e215c4addc5f8a90c4adfdcef30f240ee3d6b" @@ -89,9 +89,9 @@ version = "0.4.0" [[deps.ArrayInterface]] deps = ["Adapt", "LinearAlgebra", "SparseArrays", "SuiteSparse"] -git-tree-sha1 = "133a240faec6e074e07c31ee75619c90544179cf" +git-tree-sha1 = "ed2ec3c9b483842ae59cd273834e5b46206d6dda" uuid = "4fba245c-0d91-5ea0-9b3e-6abc04ee57a9" -version = "7.10.0" +version = "7.11.0" [deps.ArrayInterface.extensions] ArrayInterfaceBandedMatricesExt = "BandedMatrices" @@ -234,15 +234,15 @@ version = "0.3.13" [[deps.ChainRules]] deps = ["Adapt", "ChainRulesCore", "Compat", "Distributed", "GPUArraysCore", "IrrationalConstants", "LinearAlgebra", "Random", "RealDot", "SparseArrays", "SparseInverseSubset", "Statistics", "StructArrays", "SuiteSparse"] -git-tree-sha1 = "291821c1251486504f6bae435227907d734e94d2" +git-tree-sha1 = "5ec157747036038ec70b250f578362268f0472f1" uuid = "082447d4-558c-5d27-93f4-14fc19e9eca2" -version = "1.66.0" +version = "1.68.0" [[deps.ChainRulesCore]] deps = ["Compat", "LinearAlgebra"] -git-tree-sha1 = "575cd02e080939a33b6df6c5853d14924c08e35b" +git-tree-sha1 = "71acdbf594aab5bbb2cec89b208c41b4c411e49f" uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4" -version = "1.23.0" +version = "1.24.0" weakdeps = ["SparseArrays"] [deps.ChainRulesCore.extensions] @@ -547,9 +547,9 @@ version = "1.0.4" [[deps.Enzyme]] deps = ["CEnum", "EnzymeCore", "Enzyme_jll", "GPUCompiler", "LLVM", "Libdl", "LinearAlgebra", "ObjectFile", "Preferences", "Printf", "Random"] -git-tree-sha1 = "0ed40abf91d84e02ee2f11eeb6552a823548d49f" +git-tree-sha1 = "c8dfc251413d4452e52974344f1c88a38a93f189" uuid = "7da242da-08ed-463a-9acd-ee780be4f1d9" -version = "0.12.9" +version = "0.12.10" weakdeps = ["ChainRulesCore", "SpecialFunctions", "StaticArrays"] [deps.Enzyme.extensions] @@ -568,9 +568,9 @@ weakdeps = ["Adapt"] [[deps.Enzyme_jll]] deps = ["Artifacts", "JLLWrappers", "LazyArtifacts", "Libdl", "TOML"] -git-tree-sha1 = "366b02be78e08daf2e8d86b680568b6a14990ff7" +git-tree-sha1 = "a1ace5737c6c6fed5877a1980c0b8d873d1d2be7" uuid = "7cc45869-7501-5eee-bdea-0790c847d4ef" -version = "0.0.117+0" +version = "0.0.119+0" [[deps.Expat_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] @@ -603,9 +603,9 @@ version = "0.1.1" [[deps.FastBroadcast]] deps = ["ArrayInterface", "LinearAlgebra", "Polyester", "Static", "StaticArrayInterface", "StrideArraysCore"] -git-tree-sha1 = "edad9f7040f1d3b6e8c023b1e29ebe417c25bc55" +git-tree-sha1 = "e17367f052035620d832499496080f792fa7ea47" uuid = "7034ab61-46d4-4ed7-9d0f-46aef9175898" -version = "0.3.1" +version = "0.3.2" [[deps.FastClosures]] git-tree-sha1 = "acebe244d53ee1b461970f8910c235b259e772ef" @@ -702,9 +702,9 @@ version = "0.1.3" [[deps.Functors]] deps = ["LinearAlgebra"] -git-tree-sha1 = "d3e63d9fa13f8eaa2f06f64949e2afc593ff52c2" +git-tree-sha1 = "8a66c07630d6428eaab3506a0eabfcf4a9edea05" uuid = "d9f16b24-f501-4c13-a1f2-28368ffc5196" -version = "0.4.10" +version = "0.4.11" [[deps.Future]] deps = ["Random"] @@ -896,9 +896,9 @@ version = "0.9.6" [[deps.LLVM]] deps = ["CEnum", "LLVMExtra_jll", "Libdl", "Preferences", "Printf", "Requires", "Unicode"] -git-tree-sha1 = "065c36f95709dd4a676dc6839a35d6fa6f192f24" +git-tree-sha1 = "389aea28d882a40b5e1747069af71bdbd47a1cae" uuid 
= "929cbde3-209d-540e-8aea-75f648917ca0" -version = "7.1.0" +version = "7.2.1" weakdeps = ["BFloat16s"] [deps.LLVM.extensions] @@ -933,9 +933,9 @@ version = "1.2.2" [[deps.LazyArrays]] deps = ["ArrayLayouts", "FillArrays", "LinearAlgebra", "MacroTools", "SparseArrays"] -git-tree-sha1 = "1567f3b9c49a8249c0921a6c29c3caddecf77383" +git-tree-sha1 = "899d44fa1a575653df5721a7fccb4988f7f09b62" uuid = "5078a376-72f3-5289-bfd5-ec5146d43c02" -version = "2.0.2" +version = "2.0.4" [deps.LazyArrays.extensions] LazyArraysBandedMatricesExt = "BandedMatrices" @@ -1041,9 +1041,9 @@ version = "2.30.1" [[deps.LogExpFunctions]] deps = ["DocStringExtensions", "IrrationalConstants", "LinearAlgebra"] -git-tree-sha1 = "18144f3e9cbe9b15b070288eef858f71b291ce37" +git-tree-sha1 = "a2d09619db4e765091ee5c6ffe8872849de0feea" uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688" -version = "0.3.27" +version = "0.3.28" [deps.LogExpFunctions.extensions] LogExpFunctionsChainRulesCoreExt = "ChainRulesCore" @@ -1202,9 +1202,9 @@ version = "1.2.0" [[deps.NonlinearSolve]] deps = ["ADTypes", "ArrayInterface", "ConcreteStructs", "DiffEqBase", "FastBroadcast", "FastClosures", "FiniteDiff", "ForwardDiff", "LazyArrays", "LineSearches", "LinearAlgebra", "LinearSolve", "MaybeInplace", "PrecompileTools", "Preferences", "Printf", "RecursiveArrayTools", "Reexport", "SciMLBase", "SimpleNonlinearSolve", "SparseArrays", "SparseDiffTools", "StaticArraysCore", "SymbolicIndexingInterface", "TimerOutputs"] -git-tree-sha1 = "a5bc9c06e28108e04de0485273f0b5933cec66ed" +git-tree-sha1 = "ed5500c66b726ec9fe8c1796c0a600353246ecf8" uuid = "8913a72c-1f9b-4ce2-8d82-65094dcecaec" -version = "3.12.3" +version = "3.12.4" [deps.NonlinearSolve.extensions] NonlinearSolveBandedMatricesExt = "BandedMatrices" @@ -1300,9 +1300,9 @@ version = "1.6.3" [[deps.OrdinaryDiffEq]] deps = ["ADTypes", "Adapt", "ArrayInterface", "DataStructures", "DiffEqBase", "DocStringExtensions", "EnumX", "ExponentialUtilities", "FastBroadcast", "FastClosures", "FillArrays", "FiniteDiff", "ForwardDiff", "FunctionWrappersWrappers", "IfElse", "InteractiveUtils", "LineSearches", "LinearAlgebra", "LinearSolve", "Logging", "MacroTools", "MuladdMacro", "NonlinearSolve", "Polyester", "PreallocationTools", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "SciMLOperators", "SciMLStructures", "SimpleNonlinearSolve", "SimpleUnPack", "SparseArrays", "SparseDiffTools", "StaticArrayInterface", "StaticArrays", "TruncatedStacktraces"] -git-tree-sha1 = "75b0d2bf28d0df92931919004a5be5304c38cca2" +git-tree-sha1 = "78486623c0b7f6779beafadf2a00a095b4b687ef" uuid = "1dea7af3-3e70-54e6-95c3-0bf5283fa5ed" -version = "6.80.1" +version = "6.81.1" [[deps.PCRE2_jll]] deps = ["Artifacts", "Libdl"] @@ -1370,9 +1370,9 @@ version = "0.2.4" [[deps.PreallocationTools]] deps = ["Adapt", "ArrayInterface", "ForwardDiff"] -git-tree-sha1 = "a660e9daab5db07adf3dedfe09b435cc530d855e" +git-tree-sha1 = "406c29a7f46706d379a3bce45671b4e3a39ddfbc" uuid = "d236fae5-4411-538c-8e31-a6e3d9e00b46" -version = "0.4.21" +version = "0.4.22" weakdeps = ["ReverseDiff"] [deps.PreallocationTools.extensions] @@ -1456,9 +1456,9 @@ version = "1.3.4" [[deps.RecursiveArrayTools]] deps = ["Adapt", "ArrayInterface", "DocStringExtensions", "GPUArraysCore", "IteratorInterfaceExtensions", "LinearAlgebra", "RecipesBase", "SparseArrays", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables"] -git-tree-sha1 = "d0f8d22294f932efb1617d669aff73a5c97d38ff" +git-tree-sha1 = "2cea01606a852c2431ded77293eb533b511b19e6" uuid 
= "731186ca-8d62-57ce-b412-fbd966d074cd" -version = "3.20.0" +version = "3.22.0" [deps.RecursiveArrayTools.extensions] RecursiveArrayToolsFastBroadcastExt = "FastBroadcast" @@ -1547,10 +1547,10 @@ uuid = "476501e8-09a2-5ece-8869-fb82de89a1fa" version = "0.6.42" [[deps.SciMLBase]] -deps = ["ADTypes", "ArrayInterface", "CommonSolve", "ConstructionBase", "Distributed", "DocStringExtensions", "EnumX", "FunctionWrappersWrappers", "IteratorInterfaceExtensions", "LinearAlgebra", "Logging", "Markdown", "PrecompileTools", "Preferences", "Printf", "RecipesBase", "RecursiveArrayTools", "Reexport", "RuntimeGeneratedFunctions", "SciMLOperators", "SciMLStructures", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables"] -git-tree-sha1 = "9f59654e2a85017ee27b0f59c7fac5a57aa10ced" +deps = ["ADTypes", "Accessors", "ArrayInterface", "CommonSolve", "ConstructionBase", "Distributed", "DocStringExtensions", "EnumX", "FunctionWrappersWrappers", "IteratorInterfaceExtensions", "LinearAlgebra", "Logging", "Markdown", "PrecompileTools", "Preferences", "Printf", "RecipesBase", "RecursiveArrayTools", "Reexport", "RuntimeGeneratedFunctions", "SciMLOperators", "SciMLStructures", "StaticArraysCore", "Statistics", "SymbolicIndexingInterface", "Tables"] +git-tree-sha1 = "1d1d1ff37d2917cad263fa186cbc19ce4b587ccf" uuid = "0bca4576-84f4-4d90-8ffe-ffa030f20462" -version = "2.39.0" +version = "2.40.0" [deps.SciMLBase.extensions] SciMLBaseChainRulesCoreExt = "ChainRulesCore" @@ -1729,9 +1729,9 @@ weakdeps = ["OffsetArrays", "StaticArrays"] [[deps.StaticArrays]] deps = ["LinearAlgebra", "PrecompileTools", "Random", "StaticArraysCore"] -git-tree-sha1 = "9ae599cd7529cfce7fea36cf00a62cfc56f0f37c" +git-tree-sha1 = "6e00379a24597be4ae1ee6b2d882e15392040132" uuid = "90137ffa-7385-5640-81b9-e52037218182" -version = "1.9.4" +version = "1.9.5" weakdeps = ["ChainRulesCore", "Statistics"] [deps.StaticArrays.extensions] @@ -1872,9 +1872,9 @@ weakdeps = ["PDMats"] TrackerPDMatsExt = "PDMats" [[deps.TranscodingStreams]] -git-tree-sha1 = "5d54d076465da49d6746c647022f3b3674e64156" +git-tree-sha1 = "a947ea21087caba0a798c5e494d0bb78e3a1a3a0" uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa" -version = "0.10.8" +version = "0.10.9" weakdeps = ["Random", "Test"] [deps.TranscodingStreams.extensions] diff --git a/dev/getting_started/index.html b/dev/getting_started/index.html index 8a3d548..2c3033b 100644 --- a/dev/getting_started/index.html +++ b/dev/getting_started/index.html @@ -17,7 +17,7 @@ ## Solving with multiple threads sol = solve(prob, alg, multithreading = true)
PIDESolution
 timespan: [0.0, 0.5]
-u(x,t): [1.0, 0.9684841844257668]

+u(x,t): [1.0, 0.9693189997153481]

Non-local PDE with Neumann boundary conditions

Let's include in the previous equation non-local competition, i.e.

\[\partial_t u = u (1 - \int_\Omega u(t,y)dy) + \frac{1}{2}\sigma^2\Delta_xu \tag{2}\]

where $\Omega = [-1/2, 1/2]^d$, and let's assume Neumann Boundary condition on $\Omega$.

using HighDimPDE
 
 ## Definition of the problem
 d = 10 # dimension of the problem
@@ -35,7 +35,7 @@
 
 sol = solve(prob, alg, multithreading = true)
PIDESolution
 timespan: [0.0, 0.5]
-u(x,t): [1.0, 1.2241793034848043]

+u(x,t): [1.0, 1.2263063190897845]

DeepSplitting

Let's solve the previous equation with DeepSplitting.

using HighDimPDE
 using Flux # needed to define the neural network
 
 ## Definition of the problem
@@ -72,11 +72,11 @@
             maxiters = 1000,
             batch_size = 1000)
PIDESolution
 timespan: 0.0:0.09999988228082657:0.49999941140413284
-u(x,t): Float32[1.0, 0.91459906, 0.9576605, 1.0062392, 1.0516887, 1.0929184]

+u(x,t): Float32[1.0, 0.8997626, 0.9463083, 0.9875963, 1.039739, 1.0821241]

Solving on the GPU

DeepSplitting can run on the GPU for (much) improved performance. To do so, just set use_cuda = true.

sol = solve(prob, 
             alg, 
             0.1, 
             verbose = true, 
             abstol = 2e-3,
             maxiters = 1000,
             batch_size = 1000,
-            use_cuda=true)
+ use_cuda=true) diff --git a/dev/index.html b/dev/index.html index 296a0b9..ebfb1b2 100644 --- a/dev/index.html +++ b/dev/index.html @@ -8,8 +8,8 @@ \end{aligned}\]

where $u \colon [0,T] \times \Omega \to \R$, $\Omega \subseteq \R^d$ is subject to initial and boundary conditions, and where $d$ is large.

Note

The difference between the two problems is that in Partial Integro Differential Equations, the integrand is integrated over x, while in Parabolic Integro Differential Equations, the function f is just evaluated for x.

HighDimPDE.jl implements solver algorithms that break down the curse of dimensionality, including the Deep Splitting scheme, the Multi-Level Picard (MLP) iterations scheme, and the Deep BSDE scheme.

To make the most out of HighDimPDE.jl, we advise to first have a look at the section on the Feynman-Kac formula, as all solver algorithms heavily rely on it.

Algorithm overview


Features | DeepSplitting | MLP | DeepBSDE
Time discretization free
Mesh-free
Single point $x \in \R^d$ approximation
$d$-dimensional cube $[a,b]^d$ approximation ✔️
GPU
Gradient non-linearities ✔️

✔️ : will be supported in the future

Reproducibility

The documentation of this SciML package was built using these direct dependencies,
Status `~/work/HighDimPDE.jl/HighDimPDE.jl/docs/Project.toml`
   [e30172f5] Documenter v1.4.1
   [587475ba] Flux v0.14.15
-  [57c578d5] HighDimPDE v2.0.0 `~/work/HighDimPDE.jl/HighDimPDE.jl`
and using this machine and Julia version.
Julia Version 1.10.3
-Commit 0b4590a5507 (2024-04-30 10:59 UTC)
+  [57c578d5] HighDimPDE v2.0.0 `~/work/HighDimPDE.jl/HighDimPDE.jl`
and using this machine and Julia version.
Julia Version 1.10.4
+Commit 48d4fd48430 (2024-06-04 10:41 UTC)
 Build Info:
   Official https://julialang.org/ release
 Platform Info:
@@ -28,7 +28,7 @@
   [66dad0bd] AliasTables v1.1.3
   [dce04be8] ArgCheck v2.3.0
   [ec485272] ArnoldiMethod v0.4.0
-  [4fba245c] ArrayInterface v7.10.0
+  [4fba245c] ArrayInterface v7.11.0
   [4c555306] ArrayLayouts v1.9.3
   [a9b6321e] Atomix v0.1.0
   [ab4f0b2a] BFloat16s v0.5.0
@@ -41,8 +41,8 @@
   [1af6417a] CUDA_Runtime_Discovery v0.3.3
   [49dc2e85] Calculus v0.5.1
   [7057c7e9] Cassette v0.3.13
-  [082447d4] ChainRules v1.66.0
-  [d360d2e6] ChainRulesCore v1.23.0
+  [082447d4] ChainRules v1.68.0
+  [d360d2e6] ChainRulesCore v1.24.0
   [fb6a15b2] CloseOpenIntervals v0.1.12
   [944b1d66] CodecZlib v0.7.4
   [3da002f7] ColorTypes v0.11.5
@@ -75,13 +75,13 @@
   [fa6b7ba4] DualNumbers v0.6.8
   [da5c29d0] EllipsisNotation v1.8.0
   [4e289a0a] EnumX v1.0.4
-  [7da242da] Enzyme v0.12.9
+  [7da242da] Enzyme v0.12.10
   [f151be2c] EnzymeCore v0.7.3
   [d4d017d3] ExponentialUtilities v1.26.1
   [e2ba6199] ExprTools v0.1.10
   [cc61a311] FLoops v0.2.1
   [b9860ae5] FLoopsBase v0.1.1
-  [7034ab61] FastBroadcast v0.3.1
+  [7034ab61] FastBroadcast v0.3.2
   [9aa1b823] FastClosures v0.3.2
   [29a986be] FastLapackInterface v2.0.4
   [1a297f60] FillArrays v1.11.0
@@ -92,7 +92,7 @@
   [f62d2435] FunctionProperties v0.1.2
   [069b7b12] FunctionWrappers v1.1.3
   [77dc65aa] FunctionWrappersWrappers v0.1.3
-  [d9f16b24] Functors v0.4.10
+  [d9f16b24] Functors v0.4.11
   [0c68f7d7] GPUArrays v10.1.1
   [46192b85] GPUArraysCore v0.1.6
   [61eb1bfa] GPUCompiler v0.26.5
@@ -119,16 +119,16 @@
   [ef3ab10e] KLU v0.6.0
   [63c18a36] KernelAbstractions v0.9.19
   [ba0b0d4f] Krylov v0.9.6
-  [929cbde3] LLVM v7.1.0
+  [929cbde3] LLVM v7.2.1
   [8b046642] LLVMLoopInfo v1.0.0
   [b964fa9f] LaTeXStrings v1.3.1
   [10f19ff3] LayoutPointers v0.1.15
   [0e77f7df] LazilyInitializedFields v1.2.2
-  [5078a376] LazyArrays v2.0.2
+  [5078a376] LazyArrays v2.0.4
   [2d8b4e74] LevyArea v1.0.0
   [d3d80556] LineSearches v7.2.0
   [7ed4a6bd] LinearSolve v2.30.1
-  [2ab3a3ac] LogExpFunctions v0.3.27
+  [2ab3a3ac] LogExpFunctions v0.3.28
   [bdcacae8] LoopVectorization v0.12.170
   [d8e11817] MLStyle v0.4.17
   [f1d291b0] MLUtils v0.4.4
@@ -145,14 +145,14 @@
   [5da4648a] NVTX v0.3.4
   [77ba4419] NaNMath v1.0.2
   [71a1bf82] NameResolution v0.1.5
-  [8913a72c] NonlinearSolve v3.12.3
+  [8913a72c] NonlinearSolve v3.12.4
   [d8793406] ObjectFile v0.4.1
   [6fe1bfb0] OffsetArrays v1.14.0
   [0b1bfda6] OneHotArrays v0.2.5
   [429524aa] Optim v1.9.4
   [3bd65402] Optimisers v0.3.3
   [bac558e1] OrderedCollections v1.6.3
-  [1dea7af3] OrdinaryDiffEq v6.80.1
+  [1dea7af3] OrdinaryDiffEq v6.81.1
   [90014a1f] PDMats v0.11.31
   [65ce6f38] PackageExtensionCompat v1.0.2
   [d96e819e] Parameters v0.12.3
@@ -162,7 +162,7 @@
   [1d0040c9] PolyesterWeave v0.2.1
   [2dfb63ee] PooledArrays v1.4.3
   [85a6dd25] PositiveFactorizations v0.2.4
-  [d236fae5] PreallocationTools v0.4.21
+  [d236fae5] PreallocationTools v0.4.22
   [aea7be01] PrecompileTools v1.2.1
   [21216c6a] Preferences v1.4.3
   [8162dcfd] PrettyPrint v0.2.0
@@ -174,7 +174,7 @@
   [e6cf234a] RandomNumbers v1.5.3
   [c1ae055f] RealDot v0.1.0
   [3cdcf5f2] RecipesBase v1.3.4
-  [731186ca] RecursiveArrayTools v3.20.0
+  [731186ca] RecursiveArrayTools v3.22.0
   [f2c3362d] RecursiveFactorization v0.2.23
   [189a3867] Reexport v1.2.2
   [2792f1a3] RegistryInstances v0.1.0
@@ -185,7 +185,7 @@
   [7e49a35a] RuntimeGeneratedFunctions v0.5.13
   [94e857df] SIMDTypes v0.1.0
   [476501e8] SLEEFPirates v0.6.42
-  [0bca4576] SciMLBase v2.39.0
+  [0bca4576] SciMLBase v2.40.0
   [c0aeaf25] SciMLOperators v0.3.8
   [1ed8b502] SciMLSensitivity v7.60.1
   [53ae85a6] SciMLStructures v1.2.0
@@ -205,7 +205,7 @@
   [171d559e] SplittablesBase v0.1.15
   [aedffcd0] Static v0.8.10
   [0d7ed370] StaticArrayInterface v1.5.0
-  [90137ffa] StaticArrays v1.9.4
+  [90137ffa] StaticArrays v1.9.5
   [1e83bf80] StaticArraysCore v1.4.2
   [82ae8749] StatsAPI v1.7.0
   [2913bbd2] StatsBase v0.34.3
@@ -221,7 +221,7 @@
   [8290d209] ThreadingUtilities v0.5.2
   [a759f4b9] TimerOutputs v0.5.24
   [9f7883ad] Tracker v0.2.34
-  [3bb67fe8] TranscodingStreams v0.10.8
+  [3bb67fe8] TranscodingStreams v0.10.9
 ⌃ [28d57a85] Transducers v0.4.80
   [d5829a12] TriangularSolve v0.2.0
   [410a4b4d] Tricks v0.1.8
@@ -237,7 +237,7 @@
   [4ee394cb] CUDA_Driver_jll v0.9.0+0
   [76a88914] CUDA_Runtime_jll v0.14.0+1
 ⌅ [62b44479] CUDNN_jll v9.0.0+1
-  [7cc45869] Enzyme_jll v0.0.117+0
+  [7cc45869] Enzyme_jll v0.0.119+0
   [2e619515] Expat_jll v2.6.2+0
   [f8c6e375] Git_jll v2.44.0+2
   [1d5cc7b8] IntelOpenMP_jll v2024.1.0+0
@@ -298,4 +298,4 @@
   [8e850b90] libblastrampoline_jll v5.8.0+1
   [8e850ede] nghttp2_jll v1.52.0+1
   [3f19e933] p7zip_jll v17.4.0+2
-Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.

+Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.

diff --git a/dev/problems/index.html b/dev/problems/index.html index f541ea5..7a8efb5 100644 --- a/dev/problems/index.html +++ b/dev/problems/index.html @@ -31,4 +31,4 @@

Defines a Parabolic Partial Differential Equation of the form:

\[\begin{aligned} \frac{du}{dt} &= \tfrac{1}{2} \text{Tr}(\sigma \sigma^T) \Delta u(x, t) + \mu \nabla u(x, t) \\ &\quad + f(x, u(x, t), ( \nabla_x u )(x, t), p, t) -\end{aligned}\]

Arguments

Optional Arguments

source
Note

While choosing to define a PDE using PIDEProblem, note that the function being integrated f is a function of the form f(x, y, v_x, v_y, ∇v_x, ∇v_y), where y is the integration variable and x is held constant throughout the integration. If a PDE has no integral and the non-linear term f is just evaluated as f(x, v_x, ∇v_x), then we suggest using ParabolicPDEProblem, as in the sketch below.
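A sketch of the two arities described above (the non-linearities themselves are placeholders):

# PIDEProblem: the non-linearity sees the integration variable y as well as x
f_nonlocal(x, y, v_x, v_y, ∇v_x, ∇v_y) = v_x .* (1f0 .- v_y)   # placeholder non-linearity

# ParabolicPDEProblem: the non-linearity only sees x, u(x), and ∇u(x)
f_local(x, v_x, ∇v_x) = v_x .* (1f0 .- v_x)                     # placeholder non-linearity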

diff --git a/dev/tutorials/deepbsde/index.html b/dev/tutorials/deepbsde/index.html index a2cc4eb..bcd791f 100644 --- a/dev/tutorials/deepbsde/index.html +++ b/dev/tutorials/deepbsde/index.html @@ -67,4 +67,4 @@ Dense(hls,hls,relu), Dense(hls,d)) pdealg = NNPDENS(u0, σᵀ∇u, opt=opt)

And now we solve the PDE. Here, we say we want to solve the underlying neural SDE using the Euler-Maruyama SDE solver with our chosen dt=0.2, do at most 150 iterations of the optimizer, 100 SDE solves per loss evaluation (for averaging), and stop if the loss ever goes below 1f-6.

ans = solve(prob, pdealg, verbose=true, maxiters=150, trajectories=100,
-                            alg=EM(), dt=0.2, pabstol = 1f-6)

+ alg=EM(), dt=0.2, pabstol = 1f-6)

References

  1. Shinde, A. S., and K. C. Takale. "Study of Black-Scholes model and its applications." Procedia Engineering 38 (2012): 270-279.
diff --git a/dev/tutorials/deepsplitting/index.html b/dev/tutorials/deepsplitting/index.html index 6baee99..69cf18f 100644 --- a/dev/tutorials/deepsplitting/index.html +++ b/dev/tutorials/deepsplitting/index.html @@ -41,4 +41,4 @@ abstol = 2e-3, maxiters = 1000, batch_size = 1000, - use_cuda=true) + use_cuda=true) diff --git a/dev/tutorials/mlp/index.html b/dev/tutorials/mlp/index.html index 235f8d7..b2a49b5 100644 --- a/dev/tutorials/mlp/index.html +++ b/dev/tutorials/mlp/index.html @@ -31,4 +31,4 @@ ## Definition of the algorithm alg = MLP(mc_sample = mc_sample ) -sol = solve(prob, alg, multithreading=true) +sol = solve(prob, alg, multithreading=true) diff --git a/dev/tutorials/nnkolmogorov/index.html b/dev/tutorials/nnkolmogorov/index.html index 6fa4cfd..dd45af7 100644 --- a/dev/tutorials/nnkolmogorov/index.html +++ b/dev/tutorials/nnkolmogorov/index.html @@ -25,4 +25,4 @@ alg = NNKolmogorov(m, opt) m = Chain(Dense(d, 16, elu), Dense(16, 32, elu), Dense(32, 16, elu), Dense(16, 1)) sol = solve(prob, alg, sdealg, verbose = true, dt = 0.01, - dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300) + dx = 0.0001, trajectories = 1000, abstol = 1e-6, maxiters = 300) diff --git a/dev/tutorials/nnparamkolmogorov/index.html b/dev/tutorials/nnparamkolmogorov/index.html index 5751891..489944a 100644 --- a/dev/tutorials/nnparamkolmogorov/index.html +++ b/dev/tutorials/nnparamkolmogorov/index.html @@ -43,4 +43,4 @@ p_sigma_test = rand(p_domain.p_sigma[1]:dps.p_sigma:p_domain.p_sigma[2], 1, 1) t_test = rand(tspan[1]:dt:tspan[2], 1, 1) p_mu_test = nothing -p_phi_test = nothing
+p_phi_test = nothing
sol.ufuns(x_test, t_test, p_sigma_test, p_mu_test, p_phi_test)
diff --git a/dev/tutorials/nnstopping/index.html b/dev/tutorials/nnstopping/index.html index 964db44..dfc7c6a 100644 --- a/dev/tutorials/nnstopping/index.html +++ b/dev/tutorials/nnstopping/index.html @@ -21,4 +21,4 @@ for i in 1:N]
Note

The number of models should be equal to the number of time discretization steps.

And finally we define our optimizer and algorithm, and call solve:

opt = Flux.Optimisers.Adam(0.01)
 alg = NNStopping(models, opt)
 
-sol = solve(prob, alg, SRIW1(); dt = dt, trajectories = 1000, maxiters = 1000, verbose = true)
+sol = solve(prob, alg, SRIW1(); dt = dt, trajectories = 1000, maxiters = 1000, verbose = true)