The Abraia Python SDK provides an easy and practical way to develop and deploy Machine Learning image applications on the edge. You can easily annotate and train your custom deep learning model with DeepLab, and deploy the model with this Python SDK.
Abraia is a Python SDK and CLI which can be installed on Windows, Mac, and Linux:
python -m pip install -U abraia
To use the SDK you have to configure your Id and Key as environment variables:
export ABRAIA_ID=user_id
export ABRAIA_KEY=user_key
On Windows, use set instead of export:
set ABRAIA_ID=user_id
set ABRAIA_KEY=user_key
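You can also set these variables from Python itself. This is a minimal sketch, assuming the SDK reads ABRAIA_ID and ABRAIA_KEY from the environment when it is first used:
import os
# Set the credentials before creating any Abraia client
os.environ['ABRAIA_ID'] = 'user_id'
os.environ['ABRAIA_KEY'] = 'user_key'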
Annotate your images and train a state-of-the-art model for classification, detection, or segmentation using DeepLab. You can directly load and run the model on the edge using the browser or this Python SDK.
Detect objects with a pre-trained YOLOv8 model on images, videos, or even camera streams.
from abraia import detect
model_uri = "multiple/models/yolov8n.onnx"
model = detect.load_model(model_uri)
im = detect.load_image('people-walking.png').convert('RGB')
results = model.run(im, confidence=0.5, iou_threshold=0.5)
im = detect.render_results(im, results)
im.show()
To run a multi-object detector on video or directly on a camera stream, you just need to use the Video class to process every frame as is done for images.
from abraia import detect
model_uri = "multiple/models/yolov8n.onnx"
model = detect.load_model(model_uri)
video = detect.Video('people-walking.mp4')
for frame in video:
results = model.run(frame, confidence=0.5, iou_threshold=0.5)
frame = detect.render_results(frame, results)
video.show(frame)
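The same loop can also run on a live camera. This is a hedged sketch that assumes the Video class accepts a numeric device index, as OpenCV's VideoCapture does; check your SDK version before relying on it:
from abraia import detect
model = detect.load_model('multiple/models/yolov8n.onnx')
# Device index 0 usually points to the default webcam (assumption)
video = detect.Video(0)
for frame in video:
    results = model.run(frame, confidence=0.5, iou_threshold=0.5)
    frame = detect.render_results(frame, results)
    video.show(frame)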
Identify people in images with face recognition, as shown below.
import os
import numpy as np
from abraia.draw import load_image, save_image, render_results
from abraia.faces import Recognition
img = load_image('images/rolling-stones.jpg')
out = img.copy()
recognition = Recognition()
results = recognition.represent_faces(img)
index = []
for src in ['mick-jagger.jpg', 'keith-richards.jpg', 'ronnie-wood.jpg', 'charlie-watts.jpg']:
img = load_image(f"images/{src}")
rslt = recognition.represent_faces(img)[0]
index.append({'name': os.path.splitext(src)[0], 'embeddings': rslt['embeddings']})
result = recognition.identify_faces(results, index)
render_results(out, results)
save_image('images/rolling-stones-identified.jpg', out)
Automatically blur car license plates in videos with just a few lines of code.
from abraia import detect
from abraia import draw
model_uri = 'multiple/models/alpd-seg.onnx'
model = detect.load_model(model_uri)
src = 'images/cars.mp4'
video = detect.Video(src, output='images/blur.mp4')
for frame in video:
results = model.run(frame)
for result in results:
polygon = detect.approximate_polygon(result['polygon'])
frame = draw.draw_blurred_polygon(frame, polygon)
video.write(frame)
video.show(frame)
Automatically recognize car license plates in images and video streams.
from abraia import draw
from abraia.alpr import ALPR
alpr = ALPR()
img = draw.load_image('images/car.jpg')
results = alpr.detect(img)
results = alpr.recognize(img, results)
results = [result for result in results if len(result['lines'])]
for result in results:
result['label'] = '\n'.join([line.get('text', '') for line in result['lines']])
del result['confidence']
frame = draw.render_results(img, results)
draw.show_image(frame)
Abraia provides a direct interface to load and save images. You can easily load and show the image, load the file metadata, or save the image as a new one.
from abraia import Abraia
abraia = Abraia()
im = abraia.load_image('usain.jpg')
abraia.save_image('usain.png', im)
im.show()
Read the image metadata and save it as a JSON file.
metadata = abraia.load_metadata('usain.jpg')
abraia.save_json('usain.json', metadata)
{'FileType': 'JPEG',
'MIMEType': 'image/jpeg',
'JFIFVersion': 1.01,
'ResolutionUnit': 'None',
'XResolution': 1,
'YResolution': 1,
'Comment': 'CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), quality = 80\n',
'ImageWidth': 640,
'ImageHeight': 426,
'EncodingProcess': 'Baseline DCT, Huffman coding',
'BitsPerSample': 8,
'ColorComponents': 3,
'YCbCrSubSampling': 'YCbCr4:2:0 (2 2)',
'ImageSize': '640x426',
'Megapixels': 0.273}
Upload a local src file to the cloud path and return the list of files and folders in the specified cloud folder.
import pandas as pd
folder = 'test/'
abraia.upload_file('images/usain-bolt.jpeg', folder)
files, folders = abraia.list_files(folder)
pd.DataFrame(files)
To list the root folder just omit the folder value.
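For example, assuming list_files can be called without arguments to list the root folder:
files, folders = abraia.list_files()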
You can download or remove a stored file just by specifying its path.
path = 'test/birds.jpg'
dest = 'images/birds.jpg'
abraia.download_file(path, dest)
abraia.remove_file(path)
The Abraia CLI provides access to the Abraia Cloud Platform through the command line. It offers a simple way to manage your files and enables the resizing and conversion of different image formats. It is an easy way to compress your images for the web (JPEG, WebP, or PNG) and get them ready to publish.
To compress an image you just need to specify the input and output paths for the image:
abraia convert images/birds.jpg images/birds_o.jpg
To resize and optimize an image maintaining the aspect ratio, it is enough to specify the width or the height of the new image:
abraia convert --width 500 images/usain-bolt.jpeg images/usain-bolt_500.jpeg
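Equivalently, you can fix only the height; the output filename below is just illustrative:
abraia convert --height 500 images/usain-bolt.jpeg images/usain-bolt_h500.jpeg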
You can also automatically change the aspect ratio by specifying both the width and height parameters and setting the resize mode (pad, crop, thumb):
abraia convert --width 333 --height 333 --mode pad images/lion.jpg images/lion_333x333.jpg
abraia convert --width 333 --height 333 images/lion.jpg images/lion_333x333.jpg
So, you can automatically resize all the images in a specific folder, preserving the aspect ratio of each image, just by specifying the target width or height:
abraia convert --width 300 [path] [dest]
Or, automatically pad or crop all the images contained in the folder by specifying both width and height:
abraia convert --width 300 --height 300 --mode crop [path] [dest]
The Multiple extension provides seamless integration of multispectral and hyperspectral images. It has been developed by ABRAIA in the Multiple project to extend the Abraia SDK and Cloud Platform, providing support for straightforward HyperSpectral Image (HSI) analysis and classification.
Just click on one of the available Colab notebooks to directly start testing the multispectral capabilities.
Or install the multiple extension to use the Abraia-Multiple SDK:
python -m pip install -U "abraia[multiple]"
For instance, you can directly load and save ENVI files, and their metadata.
from abraia.multiple import Multiple
multiple = Multiple()
img = multiple.load_image('test.hdr')
meta = multiple.load_metadata('test.hdr')
multiple.save_image('test.hdr', img, metadata=meta)
To start with, we may upload some data directly using the graphical interface, or using the Multiple API:
multiple.upload_file('PaviaU.mat')
Now, we can load the hyperspectral image data (HSI cube) directly from the cloud:
img = multiple.load_image('PaviaU.mat')
Hyperspectral images cannot be directly visualized, so we can get some random bands from our HSI cube and visualize these bands like any other monochannel image.
from abraia.multiple import hsi
imgs, indexes = hsi.random(img)
hsi.plot_images(imgs, cmap='jet')
A common operation with spectral images is to reduce the dimensionality, applying principal components analysis (PCA). We can get the first three principal components into a three bands pseudoimage, and visualize this pseudoimage.
pc_img = hsi.principal_components(img)
hsi.plot_image(pc_img, 'Principal components')
Two classification models are directly available for automatic identification on hyperspectral images. One is based on support vector machines ('svm'), while the other is based on deep image classification ('hsn'). Both models are available through a simple interface, as shown below:
n_bands, n_classes = 30, 17
# X holds the training samples (e.g. 25x25 HSI patches with n_bands channels)
# and y the corresponding class labels
model = hsi.create_model('hsn', (25, 25, n_bands), n_classes)
model.train(X, y, train_ratio=0.3, epochs=5)
y_pred = model.predict(X)
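As a quick sanity check, you can compare the predictions with the reference labels. This is a minimal sketch, assuming predict returns one class label per sample:
import numpy as np
# Overall accuracy: fraction of samples whose predicted label matches the reference label
accuracy = float(np.mean(np.array(y_pred) == np.array(y)))
print(f"Overall accuracy: {accuracy:.3f}")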
This software is licensed under the MIT License. View the license.