Installation¶
Prerequisites¶
Before installing AutoTransformers, make sure you have:
Access to the DeepOpinion Gemfury Repository, or access to the AutoTransformers Git Repository (for installing from source)
A Product Key that is valid for AutoTransformers
Place the product key in the AT_PRODUCT_KEY
environment variable. You can set it on the command line before running AutoTransformers, within the Python script that uses AutoTransformers, or in your ~/.bashrc
file on Linux to make it available to all scripts on the system.
Command line:
export AT_PRODUCT_KEY=<YOUR.PRODUCT.KEY>
Python:
import os
os.environ["AT_PRODUCT_KEY"] = "YOUR.PRODUCT.KEY"
.bashrc file:
Append the following line to the ~/.bashrc
file:
export AT_PRODUCT_KEY=<YOUR.PRODUCT.KEY>
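Whichever method you choose, the key must be visible to the Python process before AutoTransformers reads it. A minimal sketch for checking this early (the placeholder value below is hypothetical; use your real key):

```python
import os

# Hypothetical placeholder; substitute your real product key.
os.environ.setdefault("AT_PRODUCT_KEY", "YOUR.PRODUCT.KEY")

# Fail early with a clear message if the key is missing.
key = os.environ.get("AT_PRODUCT_KEY")
assert key, "AT_PRODUCT_KEY is not set"
```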
Installation (Linux)¶
Run the following commands in your terminal to install AutoTransformers. Replace <YOUR_GEMFURY_KEY> with your key for the DeepOpinion Gemfury Repository.
python -m venv env
source env/bin/activate
pip install --upgrade pip
pip install autotransformers --extra-index-url=https://<YOUR_GEMFURY_KEY>:@pypi.fury.io/deepopinion
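To confirm the install succeeded, you can check that the package is importable from the activated environment. This sketch assumes the import name matches the distribution name autotransformers:

```python
import importlib.util

# find_spec returns None when the package is not importable.
spec = importlib.util.find_spec("autotransformers")
if spec is None:
    print("autotransformers is not installed in this environment")
else:
    print("autotransformers found at", spec.origin)
```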
(Optional) Install FlashAttention for DocumentModelV4 memory improvements¶
The default document model, DocumentModelV4, makes extensive use of several performance and memory optimizations. Some of them are optional and only available if you have CUDA > 11.4 and PyTorch > 1.12 installed. If you do, install the FlashAttention library to enable them; if not, you can skip this step. Install FlashAttention with the following command:
pip install flash-attn
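If you are unsure whether your environment meets the CUDA and PyTorch requirements, a quick check along these lines can help. The helper flash_attn_supported is hypothetical (not part of AutoTransformers) and degrades gracefully when PyTorch is absent or CPU-only:

```python
def flash_attn_supported():
    """Return True if this environment likely supports flash-attn.

    Hypothetical helper: checks for CUDA > 11.4 and PyTorch > 1.12,
    the requirements stated above.
    """
    try:
        import torch
    except ImportError:
        return False  # PyTorch not installed at all
    if torch.version.cuda is None:
        return False  # CPU-only PyTorch build
    cuda = tuple(int(x) for x in torch.version.cuda.split(".")[:2])
    pt = tuple(int(x) for x in torch.__version__.split(".")[:2])
    return cuda > (11, 4) and pt > (1, 12)

print(flash_attn_supported())
```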
Next¶
Congratulations, you have successfully installed the AutoTransformers library. You can now continue with our getting-started tutorials to learn how to use this library to transform your data into machine learning models.