From b736356b30f1554175a2c6b54eb00077622129d9 Mon Sep 17 00:00:00 2001
From: kishi
Date: Thu, 16 Nov 2023 04:45:48 +0900
Subject: [PATCH] Fix BasicGym/RTBGym naming in READMEs, update citations, and tidy notebook docstrings
---
basicgym/README.md | 24 +++++++++----------
.../rtb/rtb_synthetic_customize_env_ja.ipynb | 4 ++--
recgym/README.md | 17 +++++++------
rtbgym/README.md | 19 +++++++--------
4 files changed, 31 insertions(+), 33 deletions(-)
diff --git a/basicgym/README.md b/basicgym/README.md
index 3ab6f47..9368aea 100644
--- a/basicgym/README.md
+++ b/basicgym/README.md
@@ -17,9 +17,9 @@
## Overview
-*BasicGym* is an open-source simulation platform for synthetic simulation, which is written in Python. The simulator is particularly intended for reinforcement learning algorithms and follows [OpenAI Gym](https://gym.openai.com) and [Gymnasium](https://gymnasium.farama.org/)-like interface. We design SyntheticGym as a configurative environment so that researchers and practitioners can customize the environmental modules including `StateTransitionFunction` and `RewardFunction`
+*BasicGym* is an open-source simulation platform for synthetic simulation, written in Python. The simulator is particularly intended for reinforcement learning algorithms and follows an [OpenAI Gym](https://gym.openai.com)- and [Gymnasium](https://gymnasium.farama.org/)-like interface. We design BasicGym as a configurative environment so that researchers and practitioners can customize the environmental modules, including `StateTransitionFunction` and `RewardFunction`.
-Note that SyntheticGym is publicized under [scope-rl](../) repository, which facilitates the implementation of the offline reinforcement learning procedure.
+Note that BasicGym is published under the [scope-rl](../) repository, which facilitates the implementation of the offline reinforcement learning procedure.
### Basic Setting
@@ -33,21 +33,21 @@ We formulate the following (Partially Observable) Markov Decision Process ((PO)M
### Implementation
-SyntheticGym provides a standardized environment in both discrete and continuous action settings.
+BasicGym provides a standardized environment in both discrete and continuous action settings.
- `"BasicEnv-continuous-v0"`: Standard continuous environment.
- `"BasicEnv-discrete-v0"`: Standard discrete environment.
-SyntheticGym consists of the following environment.
+BasicGym consists of the following environment.
- [BasicEnv](./envs/basic.py#L18): The basic configurative environment.
-SyntheticGym is configurative about the following module.
+BasicGym is configurative about the following modules.
- [StateTransitionFunction](./envs/simulator/function.py#L14): Class to define the state transition function.
- [RewardFunction](./envs/simulator/function.py#L101): Class to define the reward function.
Note that users can customize the above modules by following the [abstract class](./envs/simulator/base.py).
## Installation
-SyntheticGym can be installed as a part of [scope-rl](../) using Python's package manager `pip`.
+BasicGym can be installed as a part of [scope-rl](../) using Python's package manager `pip`.
```
pip install scope-rl
```
@@ -64,12 +64,12 @@ python setup.py install
We provide an example usage of the standard and customized environment. \
The online/offline RL and Off-Policy Evaluation examples are provided in [SCOPE-RL's README](../README.md).
-### Standard SyntheticEnv
+### Standard BasicEnv
-Our standard SyntheticEnv is available from `gym.make()`, following the [OpenAI Gym](https://gym.openai.com) and [Gymnasium](https://gymnasium.farama.org/)-like interface.
+Our standard BasicEnv is available from `gym.make()`, following the [OpenAI Gym](https://gym.openai.com) and [Gymnasium](https://gymnasium.farama.org/)-like interface.
```Python
-# import SyntheticGym and gym
+# import BasicGym and gym
import basicgym
import gym
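# --- illustrative continuation of the snippet above ---
# A minimal sketch of the gym/gymnasium-style interaction loop, assuming the
# "BasicEnv-continuous-v0" ID registered earlier in this README; the exact
# reset()/step() return signatures depend on the installed gym/gymnasium version.
env = gym.make("BasicEnv-continuous-v0")
obs, info = env.reset()
terminated, truncated = False, False
while not (terminated or truncated):
    action = env.action_space.sample()  # random policy, for illustration only
    obs, reward, terminated, truncated, info = env.step(action)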
@@ -134,9 +134,9 @@ plt.show()
-Note that while we use [SCOPE-RL](../README.md) and [d3rlpy](https://github.com/takuseno/d3rlpy) here, SyntheticGym is compatible with any other libraries working on the [OpenAI Gym](https://gym.openai.com) and [Gymnasium](https://gymnasium.farama.org/)-like interface.
+Note that while we use [SCOPE-RL](../README.md) and [d3rlpy](https://github.com/takuseno/d3rlpy) here, BasicGym is compatible with any other libraries working on the [OpenAI Gym](https://gym.openai.com) and [Gymnasium](https://gymnasium.farama.org/)-like interface.
-### Customized SyntheticEnv
+### Customized BasicEnv
Next, we describe how to customize the environment by instantiating the environment.
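
As a rough sketch of that flow, one could subclass the reward module and pass it to `BasicEnv` directly. The base-class name, method name, import paths, and constructor keyword below are illustrative assumptions; the real interface is defined in the [abstract class](./envs/simulator/base.py) referenced above.

```Python
# A minimal sketch, assuming hypothetical names: BaseRewardFunction, its
# sample() hook, and the reward_function keyword of BasicEnv. Check
# ./envs/simulator/base.py and ./envs/basic.py for the actual interface.
from dataclasses import dataclass

import numpy as np
from basicgym import BasicEnv  # assumed import path
from basicgym.envs.simulator.base import BaseRewardFunction  # assumed import path


@dataclass
class CustomizedRewardFunction(BaseRewardFunction):
    reward_std: float = 0.0
    random_state: int = 12345

    def __post_init__(self):
        self.random_ = np.random.default_rng(self.random_state)

    def sample(self, state: np.ndarray, action: np.ndarray) -> float:
        # illustrative reward: state-action inner product plus Gaussian noise
        return float(state @ action + self.random_.normal(0.0, self.reward_std))


env = BasicEnv(
    reward_function=CustomizedRewardFunction(),  # hypothetical keyword argument
    random_state=12345,
)
```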
@@ -257,7 +257,7 @@ Bibtex:
## Contribution
-Any contributions to SyntheticGym are more than welcome!
+Any contributions to BasicGym are more than welcome!
Please refer to [CONTRIBUTING.md](../CONTRIBUTING.md) for general guidelines on how to contribute the project.
## License
diff --git a/examples/quickstart_ja/rtb/rtb_synthetic_customize_env_ja.ipynb b/examples/quickstart_ja/rtb/rtb_synthetic_customize_env_ja.ipynb
index 5baacdc..2d5b30e 100644
--- a/examples/quickstart_ja/rtb/rtb_synthetic_customize_env_ja.ipynb
+++ b/examples/quickstart_ja/rtb/rtb_synthetic_customize_env_ja.ipynb
@@ -724,7 +724,7 @@
"\n",
"@dataclass\n",
"class CustomizedWinningPriceDistribution(BaseWinningPriceDistribution):\n",
- " \"\"\"Initialization.\"\"\"\n",
+ " \"\"\"初期化.\"\"\"\n",
" n_ads: int\n",
" n_users: int\n",
" ad_feature_dim: int\n",
@@ -746,7 +746,7 @@
" bid_prices: np.ndarray,\n",
" **kwargs,\n",
" ) -> Tuple[np.ndarray]:\n",
- " \"\"\"各オークションのインプレッションとセカンドプライスを確率的に決定する..\"\"\"\n",
+ " \"\"\"各オークションのインプレッションとセカンドプライスを確率的に決定する.\"\"\"\n",
" # 単純な正規分布からの落札価格のサンプリング\n",
" winning_prices = self.random_.normal(\n",
" loc=self.standard_bid_price,\n",
diff --git a/recgym/README.md b/recgym/README.md
index 1cde5d9..09fe88c 100644
--- a/recgym/README.md
+++ b/recgym/README.md
@@ -226,18 +226,17 @@ The statistics of the environment is also visualized at [quickstart/rec/rec_synt
If you use our software in your work, please cite our paper:
-Haruka Kiyohara, Kosuke Kawakami, Yuta Saito.
-**Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation**
-(RecSys'21 SimuRec workshop)
-[https://arxiv.org/abs/2109.08331](https://arxiv.org/abs/2109.08331)
+Haruka Kiyohara, Ren Kishimoto, Kosuke Kawakami, Ken Kobayashi, Kazuhide Nakata, Yuta Saito.
+**SCOPE-RL: A Python Library for Offline Reinforcement Learning, Off-Policy Evaluation, and Policy Selection**
+[link]() (a preprint is coming soon)
Bibtex:
```
-@article{kiyohara2021accelerating,
- title={Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation},
- author={Kiyohara, Haruka and Kawakami, Kosuke and Saito, Yuta},
- journal={arXiv preprint arXiv:2109.08331},
- year={2021}
+@article{kiyohara2023towards,
+ author = {Kiyohara, Haruka and Kishimoto, Ren and Kawakami, Kosuke and Kobayashi, Ken and Nakata, Kazuhide and Saito, Yuta},
+ title = {SCOPE-RL: A Python Library for Offline Reinforcement Learning, Off-Policy Evaluation, and Policy Selection},
+ journal = {arXiv preprint arXiv:23xx.xxxxx},
+ year = {2023},
}
```
diff --git a/rtbgym/README.md b/rtbgym/README.md
index f67296f..c086d04 100644
--- a/rtbgym/README.md
+++ b/rtbgym/README.md
@@ -166,7 +166,7 @@ plt.show()
Note that while we use [SCOPE-RL](../README.md) and [d3rlpy](https://github.com/takuseno/d3rlpy) here, RTBGym is compatible with any other libraries working on the [OpenAI Gym](https://gym.openai.com) and [Gymnasium](https://gymnasium.farama.org/)-like interface.
-### Customized RTGEnv
+### Customized RTBEnv
Next, we describe how to customize the environment by instantiating the environment.
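
To make that flow concrete, the sketch below mirrors the pattern shown in the Japanese quickstart notebook touched by this patch: a dataclass subclassing `BaseWinningPriceDistribution` that samples winning prices from a normal distribution around `standard_bid_price`. The sampling-method name, import paths, and the `RTBEnv` keyword are illustrative assumptions; the real signatures live in rtbgym's abstract base classes.

```Python
# A minimal sketch, assuming hypothetical names: the sample_outcome() hook and
# the winning_price_distribution keyword of RTBEnv; check rtbgym's abstract
# base classes for the actual interface.
from dataclasses import dataclass
from typing import Tuple

import numpy as np
from rtbgym import RTBEnv  # assumed import path
from rtbgym.envs.simulator.base import BaseWinningPriceDistribution  # assumed import path


@dataclass
class CustomizedWinningPriceDistribution(BaseWinningPriceDistribution):
    n_ads: int
    n_users: int
    ad_feature_dim: int
    standard_bid_price: int = 100
    random_state: int = 12345

    def __post_init__(self):
        self.random_ = np.random.default_rng(self.random_state)

    def sample_outcome(self, bid_prices: np.ndarray, **kwargs) -> Tuple[np.ndarray, ...]:
        # sample winning prices from a simple normal distribution, as in the
        # notebook excerpt earlier in this patch, then derive impressions
        winning_prices = self.random_.normal(
            loc=self.standard_bid_price,
            scale=self.standard_bid_price / 5,  # illustrative spread
            size=bid_prices.shape,
        )
        impressions = (bid_prices >= winning_prices).astype(int)
        return impressions, winning_prices


env = RTBEnv(
    winning_price_distribution=CustomizedWinningPriceDistribution,  # hypothetical keyword
    random_state=12345,
)
```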
@@ -359,18 +359,17 @@ Finally, example usages for online/offline RL and OPE/OPS studies are available
If you use our software in your work, please cite our paper:
-Haruka Kiyohara, Kosuke Kawakami, Yuta Saito.
-**Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation**
-(RecSys'21 SimuRec workshop)
-[https://arxiv.org/abs/2109.08331](https://arxiv.org/abs/2109.08331)
+Haruka Kiyohara, Ren Kishimoto, Kosuke Kawakami, Ken Kobayashi, Kazuhide Nakata, Yuta Saito.
+**SCOPE-RL: A Python Library for Offline Reinforcement Learning, Off-Policy Evaluation, and Policy Selection**
+[link]() (a preprint is coming soon)
Bibtex:
```
-@article{kiyohara2021accelerating,
- title={Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation},
- author={Kiyohara, Haruka and Kawakami, Kosuke and Saito, Yuta},
- journal={arXiv preprint arXiv:2109.08331},
- year={2021}
+@article{kiyohara2023towards,
+ author = {Kiyohara, Haruka and Kishimoto, Ren and Kawakami, Kosuke and Kobayashi, Ken and Nakata, Kazuhide and Saito, Yuta},
+ title = {SCOPE-RL: A Python Library for Offline Reinforcement Learning, Off-Policy Evaluation, and Policy Selection},
+ journal = {arXiv preprint arXiv:23xx.xxxxx},
+ year = {2023},
}
```