Commit d474fa4: Merge pull request #2 from NETMOUNTAINS/master

Update for exabgp v4

jbeker authored Sep 5, 2024 (2 parents: 2fe47b9 + de2c489)

README.md: 40 additions, 1 deletion
Implementing the blocklists as a BGP feed that is then null-routed on your router requires:
* The blocklists you want to subscribe to
* The interval at which to refresh the lists (don't make it less than 30 minutes)
* The proper route announcement and withdrawal syntax for your setup (see the sketch after this list)
* Install Go (the `golang-go` package on Debian-based systems)
* Compile the `blocklist` application: `go build blocklist.go`
* Install and configure [ExaBGP](https://github.com/Exa-Networks/exabgp)
* Get it peering with your router
* Have it use the `blocklist` application to provide routes
* [optional] If using a huge number of prefixes, set `exabgp.api.ack` in `/etc/exabgp.env` to `false` (see Troubleshooting below)
* Fire it up
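
For reference, the `blocklist` process talks to ExaBGP over its text API, writing one command per line to stdout (that is what `encoder text` in the config below selects). A minimal, hypothetical sketch of the announce/withdraw syntax, where the prefix and next-hop are placeholders to adjust for your setup:

```
announce route 203.0.113.0/24 next-hop 192.168.1.2
withdraw route 203.0.113.0/24 next-hop 192.168.1.2
```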

### Example `exabgp` v4+ Config File
```
process droproutes {
    run /wherever/you/put/the/application/blocklist;
    encoder text;
}

template {
    neighbor AS65332 {
        router-id 192.168.1.1;
        local-as 65332;
        local-address 192.168.1.2;
        peer-as 65256;
        family {
            ipv4 unicast;
            ipv4 multicast;
        }
        api {
            processes [ droproutes ];
        }
    }
}

neighbor 192.168.1.1 {
    inherit AS65332;
}
```
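
With the configuration saved (the path below is a placeholder), ExaBGP v4 is started by handing it the config file directly:

```
exabgp /wherever/you/put/exabgp.conf
```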

### Example `exabgp` v3 Config File

```
group AS65332 {
    ...
}
```

## Troubleshooting

#### ExaBGP v4+ crashing with more than 10,000 prefixes
* Make sure you set `exabgp.api.ack` in `/etc/exabgp.env` to `false`. (With acknowledgements enabled, ExaBGP writes a response back to the API process for every command, which can back up when tens of thousands of prefixes are announced at once.)
```
[exabgp.api]
ack = false
```

## Motivation

While the [exabgp-edgerouter](https://github.com/infowolfe/exabgp-edgerouter) provided the functionality that I wanted, the performance was not ideal as the blocklists grew in size. A list of approximately 2,000 entries took about 90 seconds to process, deduplicate, and consolidate into CIDR blocks. When I increased the lists I wanted to follow to ones composed of approximately 45,000 entries, the script was still running 90 minutes later. This wasn't going to work, so I rewrote the algorithm to be more efficient. A 45,000-entry list is now processed in under a second.
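
To illustrate the kind of algorithmic change involved (a minimal sketch in Go, not the actual `blocklist.go` implementation): deduplicating entries through a hash set is a single O(n) pass, whereas rescanning the list for every entry is O(n²), which is roughly the difference between the two runtimes described above.

```
// Sketch only: hash-set deduplication of blocklist entries.
// Assumes one IPv4/IPv6 address per entry; CIDR consolidation is omitted.
package main

import (
	"fmt"
	"net/netip"
	"sort"
)

// dedupe returns the unique, sorted addresses from raw entries in one O(n) pass.
func dedupe(entries []string) []netip.Addr {
	seen := make(map[netip.Addr]struct{}, len(entries))
	out := make([]netip.Addr, 0, len(entries))
	for _, e := range entries {
		addr, err := netip.ParseAddr(e)
		if err != nil {
			continue // skip malformed lines
		}
		if _, dup := seen[addr]; dup {
			continue // already recorded; no rescan of the output needed
		}
		seen[addr] = struct{}{}
		out = append(out, addr)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Less(out[j]) })
	return out
}

func main() {
	fmt.Println(dedupe([]string{"192.0.2.1", "192.0.2.1", "198.51.100.7"}))
}
```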
