peghaz/FENNs
FENNs: Finite Element Neural Networks

An explainable neural network architecture for solving 1D initial and boundary value problems using physics-informed learning combined with finite element methods.

🎯 Overview

FENNs (Finite Element Neural Networks) is a novel approach to solving partial differential equations (PDEs) that combines the interpretability of finite element methods with the flexibility of neural networks. Unlike traditional Physics-Informed Neural Networks (PINNs), FENNs use explicit polynomial basis functions constructed on adaptive mesh elements, making the learned solution both accurate and interpretable.

Key Features

  • Explainable Architecture: Uses Lagrange polynomial basis functions with clear physical interpretation
  • Adaptive Mesh Refinement: Automatic h-refinement and r-refinement based on residual error
  • Variational Formulation: Minimizes energy functionals for robust convergence
  • Compact & Hierarchical Bases: Supports both compact (nodal) and hierarchical polynomial representations
  • GPU Acceleration: Full PyTorch implementation with CUDA support

๐Ÿ—๏ธ Architecture

Core Concepts

FENNs discretize the domain into finite elements and represent the solution as:

$$u_h(x) = \sum_{i=1}^{N} U_i \phi_i(x)$$

where:

  • $U_i$ are learnable nodal values (network parameters)
  • $\phi_i(x)$ are polynomial basis functions (Lagrange polynomials)
  • $N$ is the number of nodes in the mesh
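In code, this expansion is just a learnable coefficient vector contracted with basis evaluations. The following is a minimal sketch for the piecewise-linear case ($p = 1$), using hypothetical names rather than the repo's actual API:

```python
import torch

# Minimal sketch (illustrative, not the repo's API): represent
# u_h(x) = sum_i U_i * phi_i(x) with piecewise-linear "hat" bases.
nodes = torch.linspace(0.0, 1.0, 11)        # mesh nodes x_i
U = torch.nn.Parameter(torch.zeros(11))     # learnable nodal values U_i

def u_h(x):
    """Evaluate the FE solution at points x via linear interpolation."""
    # Locate the element containing each x and its local coordinate t in [0, 1].
    e = torch.clamp(torch.searchsorted(nodes, x) - 1, 0, len(nodes) - 2)
    t = (x - nodes[e]) / (nodes[e + 1] - nodes[e])
    # Linear Lagrange bases on the element: phi_left = 1 - t, phi_right = t.
    return (1 - t) * U[e] + t * U[e + 1]
```

Because `U` is a `torch.nn.Parameter`, gradients of any loss with respect to the nodal values flow through `u_h` automatically, which is what makes the nodal values trainable "weights" with a direct physical meaning.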

Basis Functions

The architecture supports two types of polynomial bases:

  1. Compact Bases (modules/compact_bases/):

    • Lagrange interpolation polynomials
    • Local support on each element
    • $C^0$ continuity across elements
  2. Hierarchical Bases (modules/hierarchical_bases/):

    • Hierarchical polynomial representation
    • Efficient for p-adaptivity
    • Better conditioning for higher degrees
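As a concrete illustration of the hierarchical idea, a common degree-2 hierarchical basis on the reference element $\xi \in [-1, 1]$ consists of the two linear vertex modes plus an internal "bubble" mode that vanishes at both vertices (a standard textbook construction; the repo's `poly_deg2.py` may differ in details):

```python
import torch

# Sketch of a standard degree-2 hierarchical basis on xi in [-1, 1]:
# two linear vertex modes plus a quadratic internal bubble.
def hierarchical_basis_deg2(xi):
    n1 = 0.5 * (1.0 - xi)   # left vertex mode
    n2 = 0.5 * (1.0 + xi)   # right vertex mode
    n3 = 1.0 - xi * xi      # bubble mode (zero at both vertices)
    return torch.stack([n1, n2, n3])
```

Raising the degree adds new internal modes without changing the existing ones, which is why hierarchical bases suit p-adaptivity.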

Building Blocks

The modules/blocks.py file provides interpretable neural network components:

  • LinearBlock: ReLU-based piecewise linear activations
  • MultiplicationBlock: Composes blocks to form higher-degree polynomials
  • InversionBlock: For handling singularities and complex boundary conditions
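To see why ReLU-based blocks are interpretable here, note that a compactly supported piecewise-linear hat function is exactly representable by three ReLUs, and multiplying two such pieces (as a MultiplicationBlock would) yields a higher-degree piecewise polynomial. The helpers below are hypothetical illustrations, not the repo's API:

```python
import torch

# Illustrative sketch: a hat function (1 at `center`, 0 outside
# [center - h, center + h]) built purely from ReLUs.
def relu_hat(x, center, h):
    s = (x - (center - h)) / h
    return torch.relu(s) - 2 * torch.relu(s - 1) + torch.relu(s - 2)

# MultiplicationBlock-style composition: the product of two degree-1
# pieces is a degree-2 piecewise polynomial bump.
def quadratic_bump(x, center, h):
    return relu_hat(x, center, h) * relu_hat(x, center, h)
```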

📊 Supported Problems

1. Poisson Equation (Beam Stress Problem)

-∇·(D∇u) = f(x)  in Ω
u = u₀           on ∂Ω

Implementation: models/beam_stress_poisson1D.py

Energy functional minimized: $$J[u] = \frac{1}{2}\int_\Omega D(\nabla u)^2 \, dx - \int_\Omega f u \, dx$$
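For a piecewise-linear trial function, this functional can be evaluated by element-wise midpoint quadrature. A minimal sketch, assuming constant $D$ and $f$ for brevity (not the repo's implementation):

```python
import torch

# Midpoint-quadrature evaluation of J[u] = 0.5*int D (u')^2 dx - int f u dx
# for a piecewise-linear u with nodal values U on a 1D mesh.
def energy(U, nodes, D=1.0, f=1.0):
    dx = nodes[1:] - nodes[:-1]       # element lengths
    du = (U[1:] - U[:-1]) / dx        # piecewise-constant gradient u'
    u_mid = 0.5 * (U[1:] + U[:-1])    # u at element midpoints
    return (0.5 * D * du**2 * dx - f * u_mid * dx).sum()
```

Since every operation is differentiable, `energy(U, nodes).backward()` gives exactly the gradients needed for training the nodal values.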

2. Convection-Diffusion Equation

-D∇²u + c∇u = 0  in Ω
u = u₀, u = uₗ   on ∂Ω

Implementation: models/convection_diffusion_1D.py

Features:

  • Exact solution comparison
  • L² error computation
  • Adaptive refinement for boundary layers
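The exact-solution comparison is possible because the 1D problem above has a closed form: the general solution of $-D u'' + c u' = 0$ is $A + B e^{cx/D}$, and fitting the boundary values gives the standard result below (parameter defaults here are illustrative):

```python
import math

# Exact solution of -D u'' + c u' = 0 on [0, L] with u(0) = u0, u(L) = uL:
#   u(x) = u0 + (uL - u0) * (exp(c x / D) - 1) / (exp(c L / D) - 1)
def exact_solution(x, D=0.01, c=1.0, L=1.0, u0=0.0, uL=1.0):
    pe = c / D  # large c/D concentrates the solution in a boundary layer at x = L
    return u0 + (uL - u0) * (math.expm1(pe * x) / math.expm1(pe * L))
```

For small diffusion `D`, the solution is nearly flat and then rises sharply near `x = L`, which is exactly where adaptive refinement places extra nodes.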

🚀 Installation

Prerequisites

FEniCS is required for validation and comparison. Install using conda:

conda create -n fenns python=3.10
conda activate fenns
conda install -c conda-forge fenics-dolfinx mpich pyvista

Package Installation

Inside your conda environment, install the uv package manager, then install the project and its dependencies:

pip install uv
uv pip install -e .

This will install:

  • PyTorch (with CUDA 12.6 support on Linux/Windows)
  • NumPy
  • Matplotlib

💻 Usage

Basic Example: Solving Convection-Diffusion

from models.convection_diffusion_1D import ConvectionDiffusion1D
import torch

# Initialize model
model = ConvectionDiffusion1D(
    poly_deg=2,              # Polynomial degree per element
    num_elements=20,         # Number of finite elements
    eval_pts_per_elem=50,    # Evaluation points per element
    refine_alpha=0.6,        # Refinement threshold
    max_r_adapt=5,           # Max r-adaptivity iterations
    device="cuda"            # Use GPU
)

# Training loop
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(1000):
    optimizer.zero_grad()
    
    # Compute energy functional
    J, convection, diffusion = model.forward()
    
    # Apply boundary conditions
    bc_penalty = (model.U_nodes[0] - model.u0)**2 + \
                 (model.U_nodes[-1] - model.uL)**2
    
    loss = J + 1000.0 * bc_penalty
    loss.backward()
    optimizer.step()
    
    if epoch % 100 == 0:
        error = model.evaluate_C0_error()
        print(f"Epoch {epoch}: Loss={loss.item():.6f}, Error={error:.6f}")

# Visualize solution
model.plot_solution()
model.plot_exact_sol()

Adaptive Mesh Refinement

The architecture supports automatic mesh refinement:

# Perform h-refinement (split high-error elements)
model.h_adapt_mesh()

# Perform r-refinement (redistribute nodes)
model.r_adapt_mesh()

# Visualize refinement
model.visualize_mesh_adaptation()

🔬 Advantages Over Traditional PINNs

| Feature | FENNs | Traditional PINNs |
|---|---|---|
| Interpretability | ✅ Explicit polynomial bases | ❌ Black-box activations |
| Mesh Adaptivity | ✅ h/r-refinement | ⚠️ Limited |
| Solution Continuity | ✅ Guaranteed $C^0$ | ⚠️ Depends on architecture |
| Convergence | ✅ Monotonic (energy minimization) | ⚠️ Oscillatory |
| Derivative Accuracy | ✅ Exact polynomial derivatives | ⚠️ Autodiff approximations |
| Physical Insight | ✅ Node values = DOFs | ❌ Opaque parameters |

๐Ÿ“ Project Structure

FENNs/
├── models/                          # Problem-specific implementations
│   ├── base_steady_state_1D.py      # Base class for 1D steady-state problems
│   ├── beam_stress_poisson1D.py     # Poisson equation solver
│   ├── convection_diffusion_1D.py   # Convection-diffusion solver
│   └── hierarchical/                # Hierarchical basis implementations
│       └── beam_stress_poisson1D.py
├── modules/                         # Core neural network modules
│   ├── blocks.py                    # Interpretable activation blocks
│   ├── compact_bases/               # Lagrange polynomial bases
│   │   └── poly_general.py
│   └── hierarchical_bases/          # Hierarchical polynomial bases
│       ├── poly_deg2.py
│       └── poly_general.py
├── utils/                           # Utilities and validation
│   ├── base_logger.py               # Training logger
│   ├── lagrange_funcs.py            # Lagrange interpolation utilities
│   └── simulations/                 # FEM validation solvers
│       ├── heat_equation_1D.py
│       └── solve_poisson_problem.py
└── notebooks/                       # Jupyter notebooks
    ├── Basis Functions.ipynb        # Basis function visualization
    ├── Linear Problem Setup.ipynb   # Problem formulation examples
    └── PINNs for Comparison.ipynb   # Comparison with traditional PINNs

🎓 Methodology

1. Weak Formulation

Instead of satisfying the PDE pointwise, FENNs minimize the energy functional:

$$J[u_h] = \int_\Omega \mathcal{L}[u_h, \nabla u_h] \, dx$$

where $\mathcal{L}$ is the Lagrangian of the system.

2. Discretization

The domain $\Omega = [x_0, x_L]$ is divided into $N_e$ elements:

$$\Omega_e = [x_e, x_{e+1}], \quad e = 1, \ldots, N_e$$

On each element, the solution is approximated by:

$$u_h^e(x) = \sum_{i=1}^{p+1} U_i^e \phi_i^e(x)$$

where $p$ is the polynomial degree.
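On the reference element, the degree-$p$ Lagrange basis $\phi_i$ can be built directly from its product formula. A short sketch, assuming equally spaced interpolation nodes (other node sets such as Gauss-Lobatto are common for better conditioning):

```python
import torch

# Degree-p Lagrange basis on the reference element [-1, 1], built from the
# product formula phi_i(xi) = prod_{j != i} (xi - x_j) / (x_i - x_j).
def lagrange_basis(p, xi):
    nodes = torch.linspace(-1.0, 1.0, p + 1)   # equally spaced (an assumption)
    phis = []
    for i in range(p + 1):
        phi = torch.ones_like(xi)
        for j in range(p + 1):
            if j != i:
                phi = phi * (xi - nodes[j]) / (nodes[i] - nodes[j])
        phis.append(phi)
    return torch.stack(phis)   # shape (p + 1, len(xi))
```

Two properties follow directly from the formula: $\phi_i(x_j) = \delta_{ij}$ (so coefficients are nodal values) and $\sum_i \phi_i \equiv 1$ (partition of unity).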

3. Optimization

Parameters (nodal values $U_i$) are optimized using gradient descent:

$$U_i^{(k+1)} = U_i^{(k)} - \eta \frac{\partial J}{\partial U_i}$$

4. Adaptivity

  • h-refinement: Split elements with high residual error
  • r-refinement: Redistribute nodes to minimize error gradients
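The h-refinement rule described above can be sketched in a few lines: split every element whose error indicator exceeds a fraction (the `refine_alpha` parameter from the usage example) of the maximum error. This is an illustrative helper, not the repo's `h_adapt_mesh` implementation:

```python
import torch

# Hedged sketch of h-refinement: bisect each element whose error indicator
# exceeds refine_alpha * max(error), keeping all existing nodes.
def h_refine(nodes, elem_error, refine_alpha=0.6):
    threshold = refine_alpha * elem_error.max()
    new_nodes = [nodes[0].item()]
    for e in range(len(nodes) - 1):
        if elem_error[e] > threshold:
            # Insert the element midpoint before the right endpoint.
            new_nodes.append(0.5 * (nodes[e] + nodes[e + 1]).item())
        new_nodes.append(nodes[e + 1].item())
    return torch.tensor(new_nodes)
```

After refinement, new nodal values would typically be initialized by interpolating the current solution so training resumes without losing progress.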

📊 Validation

The utils/simulations/ directory contains reference FEM solvers using FEniCS for validation:

from utils.simulations.solve_poisson_problem import solve_poisson_1d

# Get reference solution
u_fem, x_fem = solve_poisson_1d(num_elements=100)

# Compare with FENN solution (convert the FEM array to a tensor first)
u_fem = torch.as_tensor(u_fem)
error = torch.norm(u_fenn - u_fem) / torch.norm(u_fem)
print(f"Relative L² error: {error:.2e}")

🔮 Future Directions

  • Extension to 2D/3D problems
  • Time-dependent problems (parabolic PDEs)
  • Nonlinear material models
  • Automatic differentiation for material tangents
  • Multi-physics coupling
  • p-adaptivity (adaptive polynomial order)

๐Ÿ“ Citation

If you use FENNs in your research, please consider citing:

@software{fenns2025,
  title={FENNs: Finite Element Neural Networks for Explainable PDE Solutions},
  author={Your Name},
  year={2025},
  url={https://github.com/yourusername/FENNs}
}

📄 License

This project is open source and available under the MIT License.

๐Ÿค Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

📧 Contact

For questions or collaborations, please open an issue on GitHub.


Note: This is a research project under active development; the API may change between versions.
