Vision

We build products to control and optimize compute infrastructure.

RL

TensorForce

Our production-grade open source reinforcement learning library.

Kubernetes

Kubernetes deployment

Deploy, train and serve TensorForce models.

Vision

Compute capacity drives everything we do. Making the most of your infrastructure is critical to reducing server costs and improving application performance. The driving trend in software today is containerization, which has made it simpler than ever to package, deploy and run software anywhere. At the same time, organizations must manage a growing number of on-premises, multi-cloud and edge workloads. We are building products that optimize container workflows using deep reinforcement learning.
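To make the idea concrete, a single container-placement decision can be framed as a reinforcement learning problem: the state describes current node utilization, the action picks a node for the next container, and the reward scores the resulting placement. The sketch below is purely illustrative; every name and number in it is hypothetical and it does not describe our products or TensorForce.

import random

# Purely illustrative: container placement framed as an RL problem.
# State: per-node CPU utilization; action: node to place the next container on;
# reward: negative spread of utilization (favors a balanced cluster).

NUM_NODES = 4

def get_state(node_loads):
    # Observation the agent would see: current utilization of each node
    return list(node_loads)

def step(node_loads, action, container_load=0.1):
    # Place the container on the chosen node and score the result
    node_loads[action] += container_load
    reward = -(max(node_loads) - min(node_loads))
    return node_loads, reward

# Random policy as a stand-in for a trained RL agent
node_loads = [0.0] * NUM_NODES
for _ in range(10):
    state = get_state(node_loads)
    action = random.randrange(NUM_NODES)  # a trained agent would choose this from `state`
    node_loads, reward = step(node_loads, action)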

TensorForce

TensorForce is an open source reinforcement learning library built on top of TensorFlow, focused on providing clear APIs, readability and modularization for deploying reinforcement learning solutions in both research and practice.

from tensorforce import Configuration
from tensorforce.agents import TRPOAgent
from tensorforce.core.networks import layered_network_builder

config = Configuration(
  batch_size=100,
  state=dict(shape=(10,)),
  actions=dict(continuous=False, num_actions=2),
  network=layered_network_builder([dict(type='dense', size=50), dict(type='dense', size=50)])
)

# Create a Trust Region Policy Optimization agent
agent = TRPOAgent(config=config)

# Get new data from somewhere, e.g. a client to a web app
# (MyClient is a placeholder for your own application client, not part of TensorForce)
client = MyClient('http://127.0.0.1', 8080)

# Poll new state from client
state = client.get_state()

# Get prediction from agent, execute
action = agent.act(state=state)
reward = client.execute(action)

# Add experience, agent automatically updates model according to batch size
agent.observe(reward=reward, terminal=False)
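
For longer-running applications, the same act/observe calls can be wrapped in an episodic loop. The sketch below simply continues the snippet above; MyClient and its is_terminal method are hypothetical placeholders for your own application, not part of TensorForce.

# Sketch of an episodic control loop around the same agent.
# client.is_terminal() is an assumed method signalling the end of an episode.
for episode in range(1000):
    state = client.get_state()
    terminal = False
    while not terminal:
        action = agent.act(state=state)
        reward = client.execute(action)
        state = client.get_state()
        terminal = client.is_terminal()
        # The agent updates its model internally once batch_size experiences have been collected
        agent.observe(reward=reward, terminal=terminal)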

Kubernetes deployment

Register your interest in our upcoming Kubernetes integration of TensorForce: simplified containerized deployment, training and serving of RL models on open source container management.

Stay up to date

Register your interest in our upcoming Kubernetes integration and receive very (!) occasional updates on our progress.