SLM-Env
Build Unity environment binaries for SLM-Lab and release them on npm for easy distribution.
To use a prebuilt environment, just add its npm package, e.g. `yarn add slm-env-3dball`.
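For example, a minimal sketch of pulling in a prebuilt environment from your project root (the `node_modules` path is an assumption based on the package layout described in the Release section below):

```bash
# add a prebuilt environment package
yarn add slm-env-3dball
# the MacOSX and Linux binaries should then sit under the package's build/ folder
ls node_modules/slm-env-3dball/build
```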
Installation
Building a binary requires 4 things:

- Node.js with `npm`
- the Unity editor, installed via Unity Hub. Go to `Unity Hub > Installs > Editor > Add Modules > Linux Build Support` to enable Linux builds.
- the ml-agents repo with the environment's Unity assets: `git clone https://github.com/Unity-Technologies/ml-agents.git`
- this repo: `git clone https://github.com/kengz/SLM-Env.git`
Build a Unity Environment binary
The goal is to build MacOSX and Ubuntu binaries that can be used through ml-agents' gym API. Currently this also restricts us to non-vector environments (i.e. a single agent instance per environment). In this example, we will use the Walker environment. We also recommend first going through the Unity Hub tutorial to get basic familiarity with the editor. Reference from here.
- Open the `ml-agents/UnitySDK` folder in the Unity editor.
- In the Assets tab, find the Walker scene under `ML-Agents > Examples > Walker > Scenes > Walker`. Hit the play button to preview it.
- Make any necessary asset changes:
  - to enable programmatic control, go to `WalkerAcademy` and check `Control` in the Inspector tab.
  - since we're not supporting vector environments, remove the extra walker clones by selecting all but the first `WalkerPair` game object and unchecking them in the Inspector tab.
  - next, open the asset `Walker > Brains > WalkerLearning` and, in the Inspector tab, change `Vector Observation > Stacked Vectors` to 1. Also, click on `Model` and delete it so we don't include the pretrained TF weights.
- Go to `Edit > Project Settings > Player > Resolution and Presentation`. Ensure `Run in Background` is checked and `Display Resolution Dialog` is set to Disabled.
- Now we're ready to build the binaries. Go to `File > Build Settings`:
  - click `Add Open Scenes` and add your scene
  - click `Player Settings` to show the Inspector tab. Check `Run in Background`, set `Display Resolution Dialog` to 'Disabled'. Optionally, set `Fullscreen Mode` to 'Windowed'.
  - build one for Mac OS X. Hit `Build and Run` to render immediately after building. Choose the directory `SLM-Env/bin/` and use the name `unitywalker-v0`.
  - build one for Linux. Hit `Build`, and use the same directory and name.
- Test the binary. First ensure you have the `mlagents_envs` (version `0.9.2`) and `gym_unity` pip packages installed from ml-agents (an install sketch follows the script below). Use the following script to run an example control loop:
```python
from gym_unity.envs import UnityEnv

# point this at the binary built above (omit the file extension)
env = UnityEnv('bin/unitywalker-v0', worker_id=0)
state = env.reset()
for i in range(100):
    action = env.action_space.sample()
    state, reward, done, info = env.step(action)
env.close()
```
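If those pip packages are not installed yet, one option is an editable install from the ml-agents clone; a minimal sketch, assuming the repo was cloned into the current directory as in the Installation section (directory names follow the ml-agents 0.9.x layout):

```bash
# install the env API and the gym wrapper from the cloned ml-agents repo
pip install -e ./ml-agents/ml-agents-envs
pip install -e ./ml-agents/gym-unity
```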
The binary is now ready. Next, release it to `npm`.
Release
Note: use the kebab-case naming convention with the prefix `slm-env` and the OpenAI gym version convention, so `slm-env-unitywalker-v0`.

- Open up `package.json` and update:
  - replace the env name as appropriate: `"name": "slm-env-unitywalker-v0",`
  - update the version (a sketch of the relevant fields is shown below)
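  A minimal sketch of the fields to edit (the `version` and `files` values here are illustrative assumptions; keep the rest of the file as-is):

  ```json
  {
    "name": "slm-env-unitywalker-v0",
    "version": "1.0.0",
    "files": ["build"]
  }
  ```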
- Copy both the MacOSX and Linux binary files from `bin/` to `build/`.
- Release to `npm` (make sure you are logged in first, via `npm login`): `npm publish`
Since the binaries are huge, `npm` will throw an error near the end of it. Just ignore that:

```
npm ERR! registry error parsing json
npm ERR! publish Failed PUT 403
npm ERR! code E403
npm ERR! You cannot publish over the previously published version 1.0.0. : slm-env-unitywalker-v0
```
It should be available on npmjs.com, just search for your package `slm-env-unitywalker-v0`.
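You can also query the registry directly from the command line (package name as published above):

```bash
# query the npm registry for the published version
npm view slm-env-unitywalker-v0 version
```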
- Add the release to SLM-Lab for usage: `yarn add slm-env-unitywalker-v0` (or any other prebuilt environment, e.g. `yarn add slm-env-3dball`).