Revision all imports after change in main #53

Merged: 2 commits, Nov 29, 2021
2 changes: 1 addition & 1 deletion .github/workflows/build-publish.yml
@@ -20,7 +20,7 @@ jobs:
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
-container: dokken92/dolfinx_custom:10092021
+container: dokken92/dolfinx_custom:28112021

env:
HDF5_MPI: "ON"
2 changes: 1 addition & 1 deletion .github/workflows/docker-image.yml
@@ -37,4 +37,4 @@ jobs:
run: echo ${{ secrets.DOCKERHUB_TOKEN }} | docker login -u ${{ secrets.DOCKERHUB_USERNAME }} --password-stdin
- name: Push to the DockerHub registry
run: |
-docker push dokken92/dolfinx_custom:10092021
+docker push dokken92/dolfinx_custom:28112021
7 changes: 2 additions & 5 deletions .github/workflows/main-test.yml
@@ -27,9 +27,6 @@ jobs:

env:
HDF5_MPI: "ON"
-CC: mpicc
-HDF5_DIR: "/usr/lib/x86_64-linux-gnu/hdf5/mpich/"
-DISPLAY: ":99.0"
PYVISTA_OFF_SCREEN: true

# Steps represent a sequence of tasks that will be executed as part of the job
@@ -39,12 +36,12 @@

- name: Install dependencies
run: |
-wget -qO - https://deb.nodesource.com/setup_16.x | bash
+wget -qO - https://deb.nodesource.com/setup_17.x | bash
apt-get -qq update
apt-get install -y libgl1-mesa-dev xvfb nodejs
apt-get clean
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
-pip3 install --no-cache-dir --no-binary=h5py h5py meshio
+CC=mpicc HDF_MPI="ON" HDF5_DIR="/usr/lib/x86_64-linux-gnu/hdf5/mpich/" pip3 install --no-cache-dir --no-binary=h5py h5py meshio
pip3 install --no-cache-dir tqdm pandas seaborn
pip3 install notebook nbconvert jupyter-book myst_parser pyvista jupyterlab
pip3 install --no-cache-dir matplotlib itkwidgets ipywidgets setuptools --upgrade
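The workflow change above drops the job-wide `env:` entries (`CC`, `HDF5_DIR`) in favor of an inline `VAR=value` prefix on the `pip3 install` line, which scopes the variables to that single command. A minimal sketch of the difference between the two scopes (generic variable names, not taken from the workflow):

```shell
# An inline VAR=value prefix applies only to the one command it precedes;
# a job-wide env entry (like `export`) would affect every later command too.
FOO=bar sh -c 'echo "inline: $FOO"'   # the child process sees FOO=bar
echo "after: ${FOO:-unset}"           # FOO is not set in the outer shell
```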
5 changes: 4 additions & 1 deletion Changelog.md
@@ -1,6 +1,9 @@
# Changelog

-## Dev
+## Dev
+- Various API changes relating to the import structure of DOLFINx

## 0.3.0 (09.09.2021)
- Major improvements in [Form compiler parameters](chapter4/compiler_parameters), using pandas and seaborn for visualization of speed-ups gained using form compiler parameters.
- API change: `dolfinx.cpp.la.scatter_forward(u.x)` -> `u.x.scatter_forward`
- Various plotting updates due to new version of pyvista.
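The changelog entry above summarizes the pattern applied throughout this PR: names previously reached through the flat `dolfinx` namespace (e.g. `dolfinx.FunctionSpace`) are now imported from their submodules (`from dolfinx.fem import FunctionSpace`). Since `dolfinx` itself is assumed unavailable here, a stdlib analogue (`collections.abc` standing in for `dolfinx.fem`) shows that the two spellings bind the same object:

```python
# Old style: import the package, then reach through its submodule attribute.
import collections.abc

# New style: import the name directly from the submodule it lives in.
from collections.abc import Mapping

# Both references point at the same class object; only the spelling differs.
print(collections.abc.Mapping is Mapping)  # True
```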
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,4 +1,4 @@
-FROM dokken92/dolfinx_custom:03112021
+FROM dokken92/dolfinx_custom:28112021

# create user with a home directory
ARG NB_USER
95 changes: 48 additions & 47 deletions chapter1/fundamentals_code.ipynb
@@ -51,7 +51,7 @@
"However, in cases where we have a solution we know that should have no approximation error, we know that the solution should\n",
"be produced to machine precision by the program.\n",
"\n",
-"The first eight lines of the program are importing the different modules required for solving the problem."
+"We start by importing external modules used alongside `dolfinx`."
]
},
{
@@ -60,7 +60,6 @@
"metadata": {},
"outputs": [],
"source": [
-"import dolfinx\n",
"import numpy\n",
"import ufl\n",
"from mpi4py import MPI"
@@ -70,9 +69,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"A major difference between a traditional FEniCS code and a FEniCSx code, is that one is not advised to use the wildcard import.\n",
+"A major difference between a traditional FEniCS code and a FEniCSx code is that one is not advised to use the wildcard import. We will see this throughout this first example.\n",
"## Generating simple meshes\n",
-"The next step is to define the discrete domain, _the mesh_\n"
+"The next step is to define the discrete domain, _the mesh_. We do this by importing the built-in `dolfinx.generation.UnitSquareMesh` function, which can build a $[0,1]\times[0,1]$ mesh consisting of either triangles or quadrilaterals."
]
},
{
@@ -81,7 +80,9 @@
"metadata": {},
"outputs": [],
"source": [
-"mesh = dolfinx.UnitSquareMesh(MPI.COMM_WORLD, 8, 8, dolfinx.cpp.mesh.CellType.quadrilateral)"
+"from dolfinx.generation import UnitSquareMesh\n",
+"from dolfinx.mesh import CellType\n",
+"mesh = UnitSquareMesh(MPI.COMM_WORLD, 8, 8, CellType.quadrilateral)"
]
},
{
@@ -111,7 +112,8 @@
"metadata": {},
"outputs": [],
"source": [
-" V = dolfinx.FunctionSpace(mesh, (\"CG\", 1))"
+"from dolfinx.fem import FunctionSpace\n",
+"V = FunctionSpace(mesh, (\"CG\", 1))"
]
},
{
@@ -124,12 +126,15 @@
"and hexahedra). For an overview, see:\n",
"*FIXME: Add link to all the elements we support*\n",
"\n",
-"The element degree in the code is 1. This means that we are choosing the standard $P_1$ linear Lagrange element, which has degrees of freedom at the vertices. The computed solution will be continuous across elements and linearly varying in $x$ and $y$ inside each element. Higher degree polynomial approximations are obtained by increasing the degree argument. \n",
+"The element degree in the code is 1. This means that we are choosing the standard $P_1$ linear Lagrange element, which has degrees of freedom at the vertices. \n",
+"The computed solution will be continuous across elements and linearly varying in $x$ and $y$ inside each element. Higher degree polynomial approximations are obtained by increasing the degree argument. \n",
"\n",
"## Defining the boundary conditions\n",
"\n",
-"The next step is to specify the boundary condition $u=u_D$ on $\partial\Omega_D$, which is done by over several steps. The first step is to define the function $u_D$. Into this function, we would like to interpolate the boundary condition $1 + x^2+2y^2$. We do this by first defining a `dolfinx.Function`, and then using a [lambda-function](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) in Python to define the \n",
-"spatially varying function. As we would like this program to work on one or multiple processors, we have to update the coefficients of $u_D$ that data shared between the processors. We do this by sending (scattering) the ghost values in the underlying data structure of `uD`.\n"
+"The next step is to specify the boundary condition $u=u_D$ on $\partial\Omega_D$, which is done over several steps. \n",
+"The first step is to define the function $u_D$. Into this function, we would like to interpolate the boundary condition $1 + x^2+2y^2$.\n",
+"We do this by first defining a `dolfinx.Function`, and then using a [lambda-function](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) in Python to define the \n",
+"spatially varying function.\n"
]
},
{
@@ -138,9 +143,9 @@
"metadata": {},
"outputs": [],
"source": [
-"uD = dolfinx.Function(V)\n",
-"uD.interpolate(lambda x: 1 + x[0]**2 + 2 * x[1]**2)\n",
-"uD.x.scatter_forward()"
+"from dolfinx.fem import Function\n",
+"uD = Function(V)\n",
+"uD.interpolate(lambda x: 1 + x[0]**2 + 2 * x[1]**2)"
]
},
{
@@ -150,7 +155,7 @@
"We now have the boundary data (and in this case the solution of \n",
"the finite element problem) represented in the discrete function space.\n",
"Next we would like to apply the boundary values to all degrees of freedom that are on the boundary of the discrete domain. We start by identifying the facets (line-segments) representing the outer boundary, using `dolfinx.cpp.mesh.compute_boundary_facets`.\n",
-"This function returns an array of booleans of the same size as the number of facets on this processor, where `True` indicates that the local facet $i$ is on the boundary. To reduce this to only the indices that are `True`, we use `numpy.where`.\n"
+"This function returns an array of booleans of the same size as the number of facets on this processor, where `True` indicates that the local facet $i$ is on the boundary. To reduce this to only the indices that are `True`, we use `numpy.where`."
]
},
{
Expand All @@ -161,8 +166,9 @@
"source": [
"fdim = mesh.topology.dim - 1\n",
"# Create facet to cell connectivity required to determine boundary facets\n",
+"from dolfinx.cpp.mesh import compute_boundary_facets\n",
"mesh.topology.create_connectivity(fdim, mesh.topology.dim)\n",
-"boundary_facets = numpy.where(numpy.array(dolfinx.cpp.mesh.compute_boundary_facets(mesh.topology)) == 1)[0]"
+"boundary_facets = numpy.where(numpy.array(compute_boundary_facets(mesh.topology)) == 1)[0]"
]
},
{
@@ -185,8 +191,9 @@
"metadata": {},
"outputs": [],
"source": [
-"boundary_dofs = dolfinx.fem.locate_dofs_topological(V, fdim, boundary_facets)\n",
-"bc = dolfinx.DirichletBC(uD, boundary_dofs)"
+"from dolfinx.fem import locate_dofs_topological, DirichletBC\n",
+"boundary_dofs = locate_dofs_topological(V, fdim, boundary_facets)\n",
+"bc = DirichletBC(uD, boundary_dofs)"
]
},
{
@@ -223,7 +230,8 @@
"metadata": {},
"outputs": [],
"source": [
-"f = dolfinx.Constant(mesh, -6)"
+"from dolfinx.fem import Constant\n",
+"f = Constant(mesh, -6)"
]
},
{
@@ -273,7 +281,8 @@
"metadata": {},
"outputs": [],
"source": [
-"problem = dolfinx.fem.LinearProblem(a, L, bcs=[bc], petsc_options={\"ksp_type\": \"preonly\", \"pc_type\": \"lu\"})\n",
+"from dolfinx.fem import LinearProblem\n",
+"problem = LinearProblem(a, L, bcs=[bc], petsc_options={\"ksp_type\": \"preonly\", \"pc_type\": \"lu\"})\n",
"uh = problem.solve()"
]
},
@@ -292,27 +301,27 @@
"metadata": {},
"outputs": [],
"source": [
-"V2 = dolfinx.FunctionSpace(mesh, (\"CG\", 2))\n",
-"uex = dolfinx.Function(V2)\n",
-"uex.interpolate(lambda x: 1 + x[0]**2 + 2 * x[1]**2)\n",
-"uex.x.scatter_forward()"
+"V2 = FunctionSpace(mesh, (\"CG\", 2))\n",
+"uex = Function(V2)\n",
+"uex.interpolate(lambda x: 1 + x[0]**2 + 2 * x[1]**2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"We compute the error in two different ways. First, we compute the $L^2$-norm of the error, defined by $E=\sqrt{\int_\Omega (u_D-u_h)^2\mathrm{d} x}$. We use UFL to express the $L^2$-error:"
+"We compute the error in two different ways. First, we compute the $L^2$-norm of the error, defined by $E=\sqrt{\int_\Omega (u_D-u_h)^2\mathrm{d} x}$. We use UFL to express the $L^2$-error, and use `dolfinx.fem.assemble_scalar` to compute the scalar value."
]
},
{
"cell_type": "code",
-"execution_count": 12,
+"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
+"from dolfinx.fem import assemble_scalar\n",
"L2_error = ufl.inner(uh - uex, uh - uex) * ufl.dx\n",
-"error_L2 = numpy.sqrt(dolfinx.fem.assemble_scalar(L2_error))"
+"error_L2 = numpy.sqrt(assemble_scalar(L2_error))"
]
},
{
@@ -329,15 +338,15 @@
},
{
"cell_type": "code",
-"execution_count": 13,
+"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Error_L2 : 8.24e-03\n",
-"Error_max : 2.22e-15\n"
+"Error_max : 3.11e-15\n"
]
}
],
@@ -361,7 +370,7 @@
},
{
"cell_type": "code",
-"execution_count": 14,
+"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
@@ -371,7 +380,7 @@
},
{
"cell_type": "code",
-"execution_count": 15,
+"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
@@ -388,7 +397,7 @@
},
{
"cell_type": "code",
-"execution_count": 16,
+"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
@@ -409,13 +418,13 @@
},
{
"cell_type": "code",
-"execution_count": 17,
+"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
-"model_id": "e67d55e5b10847bc8134ae036945a0bf",
+"model_id": "c8d829e8954c46699aca8c42cd8b5b12",
"version_major": 2,
"version_minor": 0
},
@@ -442,19 +451,19 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"## Interactive plotting in notebooks\n",
-"To create an interactive plot using pyvista in a Jupyter notebook we us `pyvista.ITKPlotter` which uses `itkwidgets`. To modify the visualization, click on the three bars in the top left corner."
+"## Alternative plotting\n",
+"To create a more interactive plot using pyvista in a Jupyter notebook we use `pyvista.ITKPlotter`, which uses `itkwidgets`. To modify the visualization, click on the three bars in the top left corner."
]
},
{
"cell_type": "code",
-"execution_count": 18,
+"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
-"model_id": "f352aee90a2c403cadaa4c4885907023",
+"model_id": "5a5bdbffdc324a548691f67bb09a5a4c",
"version_major": 2,
"version_minor": 0
},
@@ -484,17 +493,9 @@
},
{
"cell_type": "code",
-"execution_count": 19,
+"execution_count": 22,
"metadata": {},
-"outputs": [
-{
-"name": "stderr",
-"output_type": "stream",
-"text": [
-"2021-09-10 08:49:40.089 ( 4.500s) [main ] VTKFile.cpp:617 WARN| Output data is interpolated into a first order Lagrange space.\n"
-]
-}
-],
+"outputs": [],
"source": [
"import dolfinx.io\n",
"with dolfinx.io.VTKFile(MPI.COMM_WORLD, \"output.pvd\", \"w\") as vtk:\n",
@@ -537,7 +538,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.9.5"
+"version": "3.9.7"
}
},
"nbformat": 4,
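The notebook's $L^2$-error, $E=\sqrt{\int_\Omega (u_D-u_h)^2\mathrm{d} x}$, is ultimately assembled as a quadrature sum. As a dolfinx-free sketch of the same idea, sampling the tutorial's exact solution $u(x,y)=1+x^2+2y^2$ on a uniform $8\times 8$ grid (the helper `l2_error` is hypothetical, not part of the tutorial's API):

```python
import math

def l2_error(u_h, u_ex, cell_area):
    # Midpoint-rule analogue of E = sqrt( integral of (u_ex - u_h)^2 dx ):
    # sum squared pointwise differences, weighted by the area of each cell.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u_h, u_ex)) * cell_area)

# Sample u(x, y) = 1 + x^2 + 2y^2 at cell midpoints of an 8x8 grid on [0,1]^2.
n = 8
h = 1.0 / n
mids = [(i + 0.5) * h for i in range(n)]
u_ex = [1 + x ** 2 + 2 * y ** 2 for y in mids for x in mids]

# A discrete solution matching the data exactly gives zero error,
# while a uniform perturbation of size eps gives back an error of eps.
print(l2_error(u_ex, u_ex, h * h))                                 # 0.0
print(round(l2_error([v + 1e-3 for v in u_ex], u_ex, h * h), 12))  # 0.001
```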