Clickstat is a no-BS URL shortener with IP and GPS logging. It logs information such as IP address, User-Agent, and GPS coordinates (if GPS tracking is enabled). Access it at clickstat.xyz.
- URL Shortening: Efficiently convert long URLs into short, shareable links.
- IP & User-Agent Logging: Capture IP addresses and user-agent data for each link click.
- Optional GPS Tracking: Enable precise GPS tracking with user consent for enhanced location data.
- Real-Time Click Stats: Access detailed click statistics, including IP, GPS (if enabled), and user-agent, via a unique identifier.
- Secure & Private: Data is securely stored, with no third-party access or sharing.
Clickstat uses the following APIs to enhance its functionality. Both are optional, but certain features depend on them being configured:
- Google Safe Browsing API: Used to check whether URLs are malicious or flagged for phishing, malware, or other threats before shortening them. To enable URL verification, include a valid API key and set the `VERIFY_URL` environment variable to `true`. If `VERIFY_URL` is set to `false`, the check is skipped, which is useful if you don't have access to the API or don't require URL verification. Get your API key here.
- ipinfo.io API: Used for IP address lookups to gather geographic and other information about users who access shortened links. If you don't provide an API key, the IP lookup feature in the web app is disabled, but all other features continue to work normally. Get your API key here.
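To illustrate, an ipinfo.io lookup can be done with a single authenticated GET request. The sketch below is an assumption about how such a lookup might be wired up, not Clickstat's actual code; only the `ipinfo.io/<ip>/json?token=` URL shape and the `IP_INFO_TOKEN` variable come from this README and ipinfo's docs:

```python
import json
import os
import urllib.request

IPINFO_BASE = "https://ipinfo.io"


def build_lookup_url(ip, token):
    """Build the ipinfo.io JSON lookup URL for a given IP address."""
    return f"{IPINFO_BASE}/{ip}/json?token={token}"


def lookup_ip(ip):
    """Return geo data for an IP, or None when no token is configured."""
    token = os.environ.get("IP_INFO_TOKEN")
    if not token:
        return None  # lookup feature disabled without a token
    with urllib.request.urlopen(build_lookup_url(ip, token), timeout=5) as resp:
        # Typical fields: ip, city, region, country, loc ("lat,lon")
        return json.load(resp)
```

Returning `None` instead of raising keeps the rest of the app working when the token is absent, matching the behavior described above.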
Clone the repository:

```shell
git clone https://github.com/itsmmdoha/clickstat
```

Navigate into the root directory:

```shell
cd clickstat
```

The root directory contains an `app` folder with all the Flask source files. For ease, we recommend running the dev environment with Docker. Clickstat uses PostgreSQL for its database and also integrates with the ipinfo.io API and Google Safe Browsing API for enhanced features.
To set up the database and API credentials, create a `.env` file in the root directory with the following content:

```
# Database Credentials
PGPASSWORD=testPassword
PGUSER=HoundSec
PGPORT=5432
PGDATABASE=clickstat
VERIFY_URL=true # set to false to disable URL verification

# API Tokens
IP_INFO_TOKEN=<API token from ipinfo.io> # optional
SAFE_BROWSING_TOKEN=<API token from Google Safe Browsing API> # optional
```

- If `VERIFY_URL` is set to `false`, the Google Safe Browsing check is skipped. This is useful if you don't have access to the API or don't need URL verification.
- If you don't set `IP_INFO_TOKEN`, the IP lookup feature will not work, but all other features will continue to function as expected.
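As a rough sketch of how the `VERIFY_URL` switch and the Safe Browsing check could fit together: the function names and the `clientId` below are hypothetical; only the v4 `threatMatches:find` endpoint (from Google's Safe Browsing docs) and the two environment variables (from this README) are given:

```python
import json
import os
import urllib.request

SAFE_BROWSING_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"


def build_threat_request(url):
    """Request body for the Safe Browsing v4 threatMatches.find method."""
    return {
        "client": {"clientId": "clickstat", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }


def is_url_safe(url):
    """True when the URL is not flagged, or when verification is disabled."""
    if os.environ.get("VERIFY_URL", "true").lower() == "false":
        return True  # VERIFY_URL=false: skip the check entirely
    req = urllib.request.Request(
        f"{SAFE_BROWSING_ENDPOINT}?key={os.environ['SAFE_BROWSING_TOKEN']}",
        data=json.dumps(build_threat_request(url)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        body = json.load(resp)
    # The API returns an empty JSON object when nothing matches.
    return "matches" not in body
```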
To start the development environment, make sure Docker and Docker Compose are installed, then run:

```shell
docker-compose -f dev-compose.yaml up --build
```

This spins up two containers: one running the Flask app and another running the PostgreSQL database. You can now access the app at http://localhost:5000.
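For orientation, a `dev-compose.yaml` for this two-container setup might look roughly like the sketch below. The service names, image tag, and build context are assumptions; defer to the file shipped in the repository:

```
# Illustrative sketch only -- see dev-compose.yaml in the repo for the real file.
services:
  app:
    build: .              # Flask app from the app/ sources
    ports:
      - "5000:5000"
    env_file: .env        # PG* credentials and API tokens from above
    depends_on:
      - db
  db:
    image: postgres:16    # assumed version
    environment:
      POSTGRES_USER: ${PGUSER}
      POSTGRES_PASSWORD: ${PGPASSWORD}
      POSTGRES_DB: ${PGDATABASE}
```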
Clone the repository:

```shell
git clone https://github.com/itsmmdoha/clickstat
```

Navigate into the root directory:

```shell
cd clickstat
```

The root directory contains an `app` folder with all the Flask app source files. For ease, we recommend running Clickstat with Docker. Clickstat uses PostgreSQL for its database, the ipinfo.io API for IP lookup, and the Google Safe Browsing API for URL security checks.
To set up the database and API credentials, create a `.env` file in the root directory with the following content:

```
# Database Credentials
PGPASSWORD=<set-a-database-password>
PGUSER=HoundSec
PGPORT=5432
PGDATABASE=clickstat
VERIFY_URL=true # set to false to disable URL verification

# API Tokens
IP_INFO_TOKEN=<API token from ipinfo.io> # optional
SAFE_BROWSING_TOKEN=<API token from Google Safe Browsing API> # optional
```

- If you don't want URL verification or don't have access to the Google Safe Browsing API, set `VERIFY_URL` to `false`. URL shortening will then proceed without checking for malicious URLs.
- If `IP_INFO_TOKEN` is not set, IP lookups are disabled, but the rest of the app works normally.
Run the following command:

```shell
docker-compose up -d
```

This spins up two containers: one running the Gunicorn WSGI server on port 8000 (mapped to localhost) and another running the PostgreSQL database.
For production, configure nginx as a reverse proxy and install an SSL certificate using certbot. Set up cron jobs for database backups (and test the restore procedure) for data safety.
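A minimal nginx server block for this setup might look like the sketch below; the domain and upstream port are assumptions based on the Gunicorn mapping above. With this in place, `certbot --nginx` can obtain and install the certificate:

```
# Illustrative reverse-proxy sketch -- adjust server_name and port to your deployment.
server {
    listen 80;
    server_name clickstat.xyz;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        # Forward the client IP so Clickstat logs the visitor's
        # address rather than the proxy's.
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```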
To create a backup of a PostgreSQL database with `pg_dump`, use the following command. This generates a plain-text SQL file without ownership information:

```shell
pg_dump -U <user> -h <database-host> -d <database-name> --no-owner -f 01_backup.sql
```

- `-U`: PostgreSQL username
- `-h`: Hostname of the PostgreSQL server
- `-d`: Name of the database to back up
- `--no-owner`: Excludes ownership information from the backup
- `-f`: Specifies the output file
You can also use the provided `snap_db.sh` script to automate the backup process. The script generates a plain-text SQL file without ownership information and stores it in the `backups` directory, which is mapped to the backups volume in the Docker container.

Before running the script, make it executable:

```shell
chmod +x snap_db.sh
```

Then take a backup with:

```shell
./snap_db.sh
```

This creates a backup file with the current date in the filename, stored in the `backups` directory.
- The backup file is generated using `pg_dump` and saved in the format `db_backup<date>.sql`.
- The `snap_db.sh` script stores backups in the `backups` directory located at the root of your repository.
- The `--no-owner` flag ensures that ownership information is excluded from the backup.
In the `db` service defined in the Docker Compose file, the `db_init` folder is mapped to the `/docker-entrypoint-initdb.d/` directory inside the PostgreSQL container. The PostgreSQL image automatically executes any scripts placed in this directory, in alphanumerical order, when the container starts with an empty data directory. This mechanism is useful for tasks like setting up databases, creating tables, and restoring data from backups.
To restore the database using this feature, follow these steps:
- Prepare the Backup File: Rename the desired backup file from the `backups` directory to `01_backup.sql`. This ensures the file is executed first, since scripts in the `db_init` directory run alphabetically.
- Place the File in the `db_init` Directory: Move the `01_backup.sql` file into the `db_init` directory. Since this folder is mapped to `/docker-entrypoint-initdb.d/` in the PostgreSQL container for the `db` service, the script is executed automatically on startup.
- Run the Docker Compose Setup: Start the Docker Compose setup:

  ```shell
  docker-compose up -d
  ```

  The PostgreSQL container in the `db` service will detect the `01_backup.sql` file in `/docker-entrypoint-initdb.d/` and execute it automatically, restoring the database.
- Ensure the backup file is in plain-text format.
- Only one backup file should be in the `db_init` directory at a time for proper restoration.
- Init scripts run only against an empty data directory; if the database volume already holds data, remove it first (e.g. `docker-compose down -v`) so the restore script is executed.