Machine Learning Tasks

Once you’ve received Pod access, you are now ready to run model tasks! If you’d rather learn how to run SQL queries, see SQL Tasks.

Bitfount refers to training or executing models as performing tasks, while protocols, models, and algorithms are all task elements that need to be specified as part of a task. For more details on Bitfount’s definitions of these elements, see Bitfount Glossary. All task execution is tracked in a Pod’s activity history.

Before you run tasks, it’s always a good idea to determine:

  1. If there are any Pod policy restrictions which might dictate what tasks you can perform against a dataset in a given Pod based on the role you’ve been assigned.
  2. If the Pod is online. You can tell this by the green icon in the Pod’s card on the “My Pods” page. If the Pod is offline, the Pod owner will need to bring the Pod back online for you.
  3. The structure of the dataset upon which you are acting.

Model Training

There are several ways to train a model on remote Pods:

  1. Bitfount Python API
  2. YAML Configuration
  3. Docker

Modelling with the Bitfount Python API

The standard approach to model training with Bitfount is the Python API. We recommend using a notebook tool to train models; the “Training Models” tutorials provide various examples using Jupyter. See below for detailed instructions, depending on whether you are using Bitfount default protocols and algorithms or custom models.

Using Bitfount-Supported Task Elements

💡 Pro-Tip: Example code for each of these steps is included in the Querying and Training a Model Tutorial.

  1. Import relevant classes from bitfount for your modelling needs.

    • Relevant classes can be found in the API Reference and will depend on the task you are planning to run.
    • For standard use cases, examples of relevant classes are covered in the Tutorials.
  2. Set up the loggers. Loggers enable you to receive input on the progress of your task and details on completion or failure.

  3. Define the model and data structure you will use to train.

  4. Train the model on the desired Pod(s): model.fit(pod_identifiers=[pod_identifier])

    • Note: If training on multiple Pods, ensure the data structures for the Pods are the same. Bitfount currently only supports horizontal federated learning.
    • model.fit automatically chooses the FederatedAveraging protocol; if you would like to specify a different protocol, you can do so and run the model like so:
    protocol = FederatedAveraging(algorithm=FederatedModelTraining(model=model))
    protocol.run(pod_identifiers=[pod_identifier])
  5. {Optional} Serialise and save the model:

model_out = Path("desired_model_path.pt")
model.serialize(model_out)
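Step 2 above (setting up the loggers) can be sketched with Python's standard logging module. This is an illustrative setup only: the "bitfount" logger namespace and the helper name below are assumptions, so check the Tutorials for the exact logger configuration the SDK recommends.

```python
import logging

def configure_task_logging(level: int = logging.INFO) -> logging.Logger:
    """Attach a console handler so task progress and completion/failure
    messages are visible while a task runs (illustrative helper)."""
    # Assumption: bitfount's loggers live under the "bitfount" namespace.
    logger = logging.getLogger("bitfount")
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
    )
    logger.addHandler(handler)
    logger.setLevel(level)
    return logger

logger = configure_task_logging()
logger.info("Loggers configured; task progress will be reported here.")
```

With logging configured at INFO level, you will see per-step progress from the task as well as a clear record of completion or failure.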

Using Custom Models

⚠️ WARNING: Bitfount does not vet the contents of custom models. Granting permission to use custom models means a Data Scientist can execute any custom model.

⚠️ WARNING: Saving a custom model to Bitfount currently does not mean the model is private. Any model a Data Scientist saves to the Hub is accessible via its hub.bitfount.com URL, so be sure you are comfortable with your model architecture being publicly accessible prior to upload.

For cases when you wish to train a model that isn’t included natively with the Bitfount SDK, we support custom models. Before using a custom model, you must ensure your Pod permissions allow you to train a custom model. If not, ask the Pod owner to authorise custom models using the instructions in the Authorising Pods guide.

💡 Pro-Tip: A detailed example of running a custom model is available in the Using Custom Models tutorial.

To run a custom model, you need to implement the following methods and save your model to a file:

  • __init__(): how to set up the model
  • configure_optimizers(): how optimizers should be configured in the model
    • forward(): how to perform a forward pass in the model and how the loss is calculated
  • training_step(): what one training step in the model looks like
  • validation_step(): what one validation step in the model looks like
  • test_step(): what one test step in the model looks like

Once you’ve defined these methods, you can follow the steps above to train a custom model using the Bitfount Python API.
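As a plain-Python illustration of the method list above, here is a toy skeleton that implements the six required names on a trivial one-parameter model. A real Bitfount custom model subclasses the SDK's model base class and operates on tensors; everything below (the class name, the scalar arithmetic) is illustrative only, not the real API.

```python
class MyCustomModel:
    """Toy model learning y = weight * x, to show the required methods."""

    def __init__(self, lr: float = 0.1):
        # How to set up the model: one learnable parameter.
        self.weight = 0.0
        self.lr = lr

    def configure_optimizers(self):
        # How optimizers should be configured; here just the learning rate.
        return {"lr": self.lr}

    def forward(self, x: float) -> float:
        # Forward pass: predict y from x with the current weight.
        return self.weight * x

    def training_step(self, batch):
        # One training step: squared-error loss plus a gradient update.
        x, y = batch
        pred = self.forward(x)
        loss = (pred - y) ** 2
        grad = 2 * (pred - y) * x
        self.weight -= self.lr * grad
        return loss

    def validation_step(self, batch):
        # One validation step: report the loss without updating weights.
        x, y = batch
        return (self.forward(x) - y) ** 2

    def test_step(self, batch):
        # One test step: identical to validation in this toy example.
        return self.validation_step(batch)

model = MyCustomModel()
for _ in range(50):
    model.training_step((1.0, 2.0))  # learn y = 2x from a single example
```

After a few dozen steps the weight converges towards 2.0, showing how the training/validation/test split of responsibilities fits together.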

Modelling with a YAML file

If you’d prefer not to interact directly with the Python API, we provide a pre-populated model training script called run_modeller, which requires the use of a YAML file to run.

💡 Pro-Tip: A tutorial for model training using the run_modeller script is available at Training a Model Using a YAML Configuration File.

The run_modeller script method for training requires you to prepare a yaml file to define model inputs, which are specified in the ModellerConfig class in the API Reference guide.

For example, training a feedforward neural network on the "bitfount/prosper" Pod can be configured using the following yaml:

pods:
    identifiers:
        - bitfount/prosper
data:
    assign:
        target: TARGET
task:
    protocol:
        name: FederatedAveraging
        arguments:
            epochs_between_parameter_updates: 1
    algorithm:
        name: FederatedModelTraining
        aggregator:
            secure: False
    model:
        name: PyTorchTabularClassifier
        hyperparameters:
            epochs: 2
            batch_size: 32
        optimizer:
            name: RAdam
            params:
                lr: 0.001

Additional Pods can be added by simply adding the relevant identifiers to the "pods" section.
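For instance, to train on a second Pod alongside bitfount/prosper, the "pods" section would look like the following (the second identifier is purely illustrative):

```yaml
pods:
    identifiers:
        - bitfount/prosper
        - bitfount/another-pod   # illustrative second Pod identifier
```

Remember that all Pods in the list must share the same data structure, as Bitfount currently only supports horizontal federated learning.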

Once the yaml file is configured, you can train the model by executing:

bitfount run_modeller <path_to_config.yaml>

To run a custom model using the YAML file method, you must include a model task in the YAML file as follows:

task:
  model:
    bitfount_model:
      model_ref: <path_to_model_file OR existing_custom_model_name_from_Hub>
      username: <username to upload model with OR username of existing custom model owner>

Modelling with Docker

Instead of running the Python script directly, you can also run it via Docker using the Modeller service docker image hosted at GitHub.

The image requires a config.yaml file, which follows the same format as the yaml used by the run_modeller script. By default the docker image will try to load it from /mount/config/config.yaml inside the docker container. You can provide this file in one of two ways:

  1. Mount/bind a volume to the container. Exactly how you do this may vary depending on your platform/environment (Docker/docker-compose/ECS).
  2. Copy a config file into a stopped container using docker cp.
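For option 1, a minimal docker-compose service might look like the following. The image reference is a placeholder; substitute the actual Modeller image name from GitHub.

```yaml
services:
    modeller:
        image: <modeller-image>   # placeholder: use the Modeller image hosted at GitHub
        volumes:
            # Bind-mount a local config into the path the image reads by default.
            - ./config.yaml:/mount/config/config.yaml:ro
```

This binds your local config.yaml to /mount/config/config.yaml inside the container, which is where the image looks for it by default.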

Once your container is running, you will need to check the logs and complete the login step, allowing your container to authenticate with Bitfount. The process is the same as when running locally (as in the tutorials), except that the login page cannot be opened automatically for you.

Model Evaluation

Bitfount also enables remote evaluation of an existing pre-trained model via the evaluate method, without the need to return the final model output.

For detailed instructions, please see our Using Pre-Trained Models tutorial.

FAQs & Additional Relevant Tutorials

Ran into errors? Want to do something a bit more advanced?

You may wish to check out the Troubleshooting & FAQs page and explore more advanced model training capabilities via our additional tutorials.

Next Steps

You did it! For more detailed illustrations of the Bitfount product suite, feel free to peruse the User Guide.