end0tknr's kipple - web写経開発


First reinforcement learning with Unity ML-Agents

This is a hands-on transcription of Chapter 2 of "Unity ML-Agents 実践ゲームプログラミング v1.1対応版".

Because this entry uses "ML-Agents Release19 + unity 2021.3 + python3.8 on windows 11", which is newer than the book's environment, some points differ from the sample code in the book.

create unity 3D core project & install mlagents to unity

From the earlier entry, perform steps "4. install mlagents to unity" through "5-1. Preparing for training".

However, copying "C:\Users\end0t\tmp\ml-agents_19\Project\Assets\ML-Agents" is not necessary this time.

Setting up ML-Agents Release19 + unity 2021.3 + python3.8 on windows 11 - end0tknr's kipple - web写経開発

Adjusting the Main Camera position

Creating the Materials

Create the following three colors via "Project pane → Create → Material".

name   Main Maps → Albedo
Gray   RGB = 168,168,168
Brown  RGB = 212,154,33
Blue   RGB = 0,35,255

Creating the Floor

From "Hierarchy pane → 3D Object → Plane", configure the object in the Inspector pane as follows.

Creating the Target

From "Hierarchy pane → 3D Object → Cube", configure the object in the Inspector pane as follows.

Creating the Sphere (RollerAgent)

From "Hierarchy pane → 3D Object → Sphere", configure the object in the Inspector pane as follows.

Adding a Rigidbody to the RollerAgent

With the RollerAgent selected, click "Add Component" and add "Rigidbody".

Adding Behavior Parameters to the RollerAgent

With the RollerAgent selected, click "Add Component", add "Behavior Parameters", and configure it as follows.

Adding a C# script to the RollerAgent

With the RollerAgent selected, click "Add Component", add "New Script" named "RollerAgent", and implement it as follows.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using Unity.MLAgents.Actuators;

public class RollerAgent : Agent
{
    public Transform target;
    Rigidbody rBody;

    // Called when the agent is initialized
    public override void Initialize()
    {
        this.rBody = GetComponent<Rigidbody>();
    }

    // Called at the start of each episode
    public override void OnEpisodeBegin()
    {
        // If the RollerAgent has fallen off the floor
        if (this.transform.localPosition.y < 0)
        {
            // Reset the RollerAgent's position and velocity
            this.rBody.angularVelocity = Vector3.zero;
            this.rBody.velocity = Vector3.zero;
            this.transform.localPosition = new Vector3(0.0f, 0.5f, 0.0f);
        }

        // Reset the Target to a random position
        target.localPosition = new Vector3(
            Random.value * 8 - 4, 0.5f, Random.value * 8 - 4);
    }

    // Called when observations are collected
    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(target.localPosition); // Target's XYZ position
        sensor.AddObservation(this.transform.localPosition); // RollerAgent's XYZ position
        sensor.AddObservation(rBody.velocity.x); // RollerAgent's X velocity
        sensor.AddObservation(rBody.velocity.z); // RollerAgent's Z velocity
    }

    // Called when an action is executed
    public override void OnActionReceived(ActionBuffers actionBuffers)
    {
        // Apply force to the RollerAgent
        Vector3 controlSignal = Vector3.zero;
        controlSignal.x = actionBuffers.ContinuousActions[0];
        controlSignal.z = actionBuffers.ContinuousActions[1];
        rBody.AddForce(controlSignal * 10);

        // When the RollerAgent reaches the Target
        float distanceToTarget = Vector3.Distance(
            this.transform.localPosition, target.localPosition);
        if (distanceToTarget < 1.42f)
        {
            AddReward(1.0f);
            EndEpisode();
        }

        // When the RollerAgent has fallen off the floor
        if (this.transform.localPosition.y < 0)
        {
            EndEpisode();
        }
    }

    // Called to determine actions in heuristic mode
    public override void Heuristic(in ActionBuffers actionsOut)
    {
        var continuousActionsOut = actionsOut.ContinuousActions;
        continuousActionsOut[0] = Input.GetAxis("Horizontal");
        continuousActionsOut[1] = Input.GetAxis("Vertical");
    }

}

Changing the C# script settings

After implementing the C# script above, configure it in the Inspector pane as follows.

Adding a Decision Requester to the RollerAgent

With the RollerAgent selected, click "Add Component", add "Decision Requester", and configure it as follows.

Changing the Behavior Parameters settings

Now, change the settings of the Behavior Parameters added earlier.

Running the reinforcement learning

behaviors:
  RollerBall:
    trainer_type: ppo
    hyperparameters:
      batch_size: 10
      buffer_size: 100
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: true
      hidden_units: 128
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    keep_checkpoints: 5
    checkpoint_interval: 500000
    max_steps: 500000
    time_horizon: 64
    summary_freq: 1000
    threaded: true

After creating the yaml above, run the following command and then click Unity's Play button (▶); training will start.

Logs appear once training starts; reaching Mean Reward = 1.0 is sufficient, so stop with Ctrl-C at that point.

(ml_agents) C:\Users\end0t\tmp>mlagents-learn RollerBall.yaml
            ┐  ╖
      ╓╖╬|╡  ||╬╖╖
    ╓╖╬|||||┘  ╬|||||╬╖
 ╖╬|||||╬╜        ╙╬|||||╖╖                               ╗╗╗
 ╬╬╬╬╖||╦╖        ╖╬||╗╣╣╣╬      ╟╣╣╬    ╟╣╣╣             ╜╜╜  ╟╣╣
 ╬╬╬╬╬╬╬╬╖|╬╖╖╓╬╪|╓╣╣╣╣╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╒╣╣╖╗╣╣╣╗   ╣╣╣ ╣╣╣╣╣╣ ╟╣╣╖   ╣╣╣
 ╬╬╬╬┐  ╙╬╬╬╬|╓╣╣╣╝╜ ╫╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╟╣╣╣╙ ╙╣╣╣  ╣╣╣ ╙╟╣╣╜╙  ╫╣╣  ╟╣╣
 ╬╬╬╬┐     ╙╬╬╣╣     ╫╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╟╣╣╬   ╣╣╣  ╣╣╣  ╟╣╣     ╣╣╣┌╣╣╜
 ╬╬╬╜       ╬╬╣╣      ╙╝╣╣╬      ╙╣╣╣╗╖╓╗╣╣╣╜ ╟╣╣╬   ╣╣╣  ╣╣╣  ╟╣╣╦╓    ╣╣╣╣╣
 ╙   ╓╦╖    ╬╬╣╣   ╓╗╗╖            ╙╝╣╣╣╣╝╜   ╘╝╝╜   ╝╝╝  ╝╝╝   ╙╣╣╣    ╟╣╣╣
   ╩╬╬╬╬╬╬╦╦╬╬╣╣╗╣╣╣╣╣╣╣╝                                             ╫╣╣╣╣
      ╙╬╬╬╬╬╬╬╣╣╣╣╣╣╝╜
          ╙╬╬╬╣╣╣╜
             ╙
 Version information:
  ml-agents: 0.28.0,
  ml-agents-envs: 0.28.0,
  Communicator API: 1.5.0,
  PyTorch: 1.9.1+cu111
[INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected new brain: RollerBall?team=0
[INFO] Hyperparameters for behavior name RollerBall:
        trainer_type:   ppo
        hyperparameters:
          batch_size:   10
          buffer_size:  100
          learning_rate:        0.0003
          beta: 0.005
          epsilon:      0.2
          lambd:        0.95
          num_epoch:    3
          learning_rate_schedule:       linear
          beta_schedule:        linear
          epsilon_schedule:     linear
        network_settings:
          normalize:    True
          hidden_units: 128
          num_layers:   2
          vis_encode_type:      simple
          memory:       None
          goal_conditioning_type:       hyper
          deterministic:        False
        reward_signals:
          extrinsic:
            gamma:      0.99
            strength:   1.0
            network_settings:
              normalize:        False
              hidden_units:     128
              num_layers:       2
              vis_encode_type:  simple
              memory:   None
              goal_conditioning_type:   hyper
              deterministic:    False
        init_path:      None
        keep_checkpoints:       5
        checkpoint_interval:    500000
        max_steps:      500000
        time_horizon:   64
        summary_freq:   1000
        threaded:       True
        self_play:      None
        behavioral_cloning:     None
[INFO] RollerBall. Step: 1000. Time Elapsed: 22.908 s. Mean Reward: 0.286. Std of Reward: 0.452. Training.
[INFO] RollerBall. Step: 2000. Time Elapsed: 35.768 s. Mean Reward: 0.294. Std of Reward: 0.456. Training.
<snip>
[INFO] RollerBall. Step: 3000. Time Elapsed: 49.964 s. Mean Reward: 0.371. Std of Reward: 0.483. Training.
[INFO] RollerBall. Step: 26000. Time Elapsed: 469.437 s. Mean Reward: 0.992. Std of Reward: 0.089. Training.
[INFO] RollerBall. Step: 27000. Time Elapsed: 490.284 s. Mean Reward: 1.000. Std of Reward: 0.000. Training.
[INFO] RollerBall. Step: 28000. Time Elapsed: 510.507 s. Mean Reward: 0.985. Std of Reward: 0.122. Training.
[INFO] RollerBall. Step: 29000. Time Elapsed: 528.527 s. Mean Reward: 1.000. Std of Reward: 0.000. Training.
[INFO] RollerBall. Step: 30000. Time Elapsed: 546.339 s. Mean Reward: 0.993. Std of Reward: 0.086. Training.
[INFO] RollerBall. Step: 31000. Time Elapsed: 566.482 s. Mean Reward: 0.985. Std of Reward: 0.121. Training.
[INFO] RollerBall. Step: 32000. Time Elapsed: 589.243 s. Mean Reward: 1.000. Std of Reward: 0.000. Training.
[INFO] Learning was interrupted. Please wait while the graph is generated.
[INFO] Exported results\ppo\RollerBall\RollerBall-32329.onnx
[INFO] Copied results\ppo\RollerBall\RollerBall-32329.onnx to results\ppo\RollerBall.onnx.

(ml_agents) C:\Users\end0t\tmp>

Verifying the training result

The training above creates a results folder containing RollerBall.onnx; copy this file into Unity's Assets and set it in the Behavior Parameters.

Finally, click Unity's Play button (▶) to watch the RollerAgent chase the Target.

Affine transformations (scaling, translation, rotation, shear) with Pillow for python

Computation graphs of the Affine and Softmax-with-Loss layers and a numpy for python implementation - end0tknr's kipple - web写経開発

The entry above used the affine layer of a neural network; this time, the topic is the affine transformation in image processing.

Apparently, whether in neural networks or image processing, an affine transformation refers to a matrix expression like the following.

\large{
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} +
\begin{pmatrix} e \\ f \end{pmatrix}
}

The above can also be written as follows.

\large{
\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} a & b & e \\ c & d & f \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
}

For scaling an image and so on, the entries a through f above become the following.

{ \text{scaling: } \begin{pmatrix} S_x & 0 & 0 \\ 0 & S_y & 0 \\ 0 & 0 & 1 \end{pmatrix}}
{ \text{translation: } \begin{pmatrix} 1 & 0 & T_x \\ 0 & 1 & T_y \\ 0 & 0 & 1 \end{pmatrix}}
{ \text{rotation: } \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}}
{ \text{shear: } \begin{pmatrix} 1 & \tan\theta_x & 0 \\ \tan\theta_y & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}
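As a sanity check, the matrices above can be composed with numpy on homogeneous coordinates. A minimal sketch (the helper names scale/translate/rotate are mine, not from any library):

```python
import numpy as np

def scale(sx, sy):      # scaling matrix
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def translate(tx, ty):  # translation matrix
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotate(theta):      # counter-clockwise rotation matrix
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

# the point (1, 0) in homogeneous coordinates
p = np.array([1.0, 0.0, 1.0])

# rotate 90 degrees, then shift by (2, 3); composition is just a matrix product
q = translate(2, 3) @ rotate(np.pi / 2) @ p   # -> approximately (2, 4, 1)
```

Because the third coordinate stays 1, the translation column survives the product, which is the whole point of the homogeneous 3x3 form.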

Implemented in python, this looks like the following.

# coding: utf-8

from PIL import Image
import numpy as np

def main():
    img_file_path = "./marble.png"
    img = Image.open(img_file_path)
    
    # affine transformation matrix
    affine_tuple = (0.5, 0,   0,
                    0,   0.5, 0,
                    0,   0,   1)
    # Image.AFFINE has apparently become
    # Image.Transform.AFFINE as of Pillow 9.1
    # https://pillow.readthedocs.io/en/latest/releasenotes/9.1.0.html
    new_img = img.transform(
        img.size,
        Image.AFFINE,
        #Image.Transform.AFFINE,
        affine_tuple )
    new_img.show()

if __name__ == '__main__':
    main()

Image processing with Pillow & numpy for python

Notes. For details, see the comments in the python script.

# coding: utf-8

from PIL import Image
import numpy as np

def main():
    img_file_path = "./marble.png"
    img = Image.open(img_file_path)
    
    change_color_mode( img )    # color/monochrome mode conversion
    get_set_color( img )        # get/set pixel colors
    get_img_size( img )         # size info
    resize_img( img )           # resizing
    rotate_img( img )           # rotation
    paste_img( img )            # pasting

    make_new_img()              # create a new image
    from_to_bytes( img )        # image <-> byte-sequence conversion
    from_to_numpy( img )        # image <-> byte-sequence conversion (numpy version)

def from_to_numpy( img ):
    img_numpy = np.array( img )
    print( img_numpy.size, img_numpy.shape )

    new_img = Image.fromarray(img_numpy)
    new_img.show()
    
def from_to_bytes( img ):
    img_size = img.size
    print( img_size[0] * img_size[1] * 3 )
    # to a byte sequence
    img_bytes = img.tobytes()
    print( len( img_bytes ) )
    
    # from bytes back to an image
    new_img = Image.frombytes("RGB",img_size, img_bytes)
    new_img.show()

    
def make_new_img():
    # solid color
    colors = bytes( [176,224,230]*256*256 )
    new_img = Image.frombytes(
        "RGB", # L: grayscale, RGB: color
        (256,256),
        colors )
    new_img.show()
    
    # gradient
    colors_ba = bytearray( [255]*256*256*3 )
    for i in range(1, len(colors_ba), 3):
        colors_ba[i] = (i // 3) % 256
    new_img = Image.frombytes(
        "RGB", # L: grayscale, RGB: color
        (256,256),
        bytes(colors_ba) )
    new_img.show()

def get_set_color( img ):
    # Note: getpixel takes (x,y) with x < width and y < height
    for y in range( 0, img.height ):
        for x in range( 0, img.width ):
            pixel = img.getpixel( (x,y) )
            print( pixel )
            #img.putpixel((x,y),(pixel[2],pixel[1],pixel[0]) )
    
def rotate_img( img ):
    new_img = img.rotate(30)
    new_img.show()
    
    new_img = img.rotate(
        -30,
        expand=True,
        center   =(0, 60),      # center of rotation
        translate=(50,50),      # translation
        fillcolor=(255, 128, 0),# fill color outside the image
        # NEAREST (default), BICUBIC (better detail)
        resample=Image.Resampling.BICUBIC
    )
    new_img.show()
    
def resize_img( img ):
    # trim (crop)
    trim_coords = (50,50,150,150) # left, top, right, bottom
    new_img = img.crop( trim_coords )
    new_img.show()
    
    # scale up / down
    new_img = img.resize( (150,150) )
    new_img.show()

    new_img = img.resize(
        (150,150),
        # NEAREST (default), LANCZOS (better quality)
        resample=Image.Resampling.LANCZOS
    )
    new_img.show()
    
    
def get_img_size( img ):
    print(img.height, img.width, img.size ) # H, W, (W,H)
    
    print(np.array( img ).shape ) # H x W x Channel by numpy

def change_color_mode( img ):
    color_modes = [["1","1_bit_pixels"],        # bilevel (black & white)
                   ["L","8-bit-grayscale"],     # grayscale
                   ["P","8-bit-colors"]]        # palette color
    
    for color_mode in color_modes:
        new_img = img.convert( color_mode[0] )
        # new_img.save(color_mode[0] +".png")
        new_img.show()

if __name__ == '__main__':
    main()

Setting up ML-Agents Release19 + unity 2021.3 + python3.8 on windows 11

【ML-Agents Release 17 環境構築 2021.5 -Windows】【強化学習でAIを避難させる】 #2 -Unity ML-Agents - Qiita

Following the url above, set up an environment with ML-Agents Release19 + unity 2021.3 + python3.8.

* As of Sept. 2022, the latest python is 3.10, but mlagents-learn did not work with it, so python 3.8 is used.

* Not used as a reference this time, but there also seem to be youtube videos like the following.

www.youtube.com

Table of contents

0. install anaconda for win

From https://www.anaconda.com/ , download and run the installer Anaconda3-2022.05-Windows-x86_64.exe.

1. Download the ml-agents release 19 branch

https://github.com/Unity-Technologies/ml-agents/tree/release_19_branch

From the url above, download ml-agents-release_19_branch.zip and extract it as c:/Users/end0t/tmp/ml-agents_19.

Incidentally, doing the same with the "git clone" command looks like the following.

$ git clone --branch release_19 https://github.com/Unity-Technologies/ml-agents.git

2. Creating the anaconda virtual environment

Run the following from the Anaconda Prompt.

(base) C:\Users\end0t> conda create --name ml_agents19 python=3.8.13 anaconda
(base) C:\Users\end0t> activate ml_agents19

3. pip install mlagents & pytorch

(ml_agents19) C:\Users\end0t> cd tmp\ml-agents_19\ml-agents-envs
(ml_agents19) C:\Users\end0t\tmp\ml-agents_19\ml-agents-envs> pip install -e .
(ml_agents19) C:\Users\end0t\tmp\ml-agents_19\ml-agents-envs> cd ..\ml-agents
(ml_agents19) C:\Users\end0t\tmp\ml-agents_19\ml-agents> pip install -e .

Run the above from the Anaconda Prompt, then confirm the installation with the pip freeze command below.

(ml_agents19) C:\Users\end0t\tmp\ml-agents_19\ml-agents>pip freeze
   :
# Editable install with no version control (mlagents==0.28.0)
-e c:\users\end0t\tmp\ml-agents_19\ml-agents
# Editable install with no version control (mlagents-envs==0.28.0)
-e c:\users\end0t\tmp\ml-agents_19\ml-agents-envs

Next, install PyTorch with the following.

(ml_agents19) C:\Users\end0t\tmp\ml-agents_19>
  pip install torch==1.9.1 -f https://download.pytorch.org/whl/torch_stable.html

4. install mlagents to unity

Create a 3D project from unity hub.

From "menu bar → Window → Package Manager", choose "+ → Add package from disk" to open a file-selection dialog.

In the dialog, specify the following two files and install them.

  • C:\Users\end0t\tmp\ml-agents_19\com.unity.ml-agents\package.json
  • C:\Users\end0t\tmp\ml-agents_19\com.unity.ml-agents.extensions\package.json

Below is the package manager screen after installation.

Choose "menu bar → File → Build Settings", then in "Player Settings" set "Api Compatibility Level = .NET Framework".

* In the Qiita article I referenced (【ML-Agents Release 17 環境構築 2021.5 -Windows】【強化学習でAIを避難させる】 #2 -Unity ML-Agents - Qiita), the setting was "api compatibility level = .net 4.x", but ".net 4.x" was not shown in my win11 environment.

5-1. Preparing for training

Install input system 1.3 from the package manager. A warning dialog appears at this point; click "yes".

After restarting unity, set "Active Input Handling = Both" in the player settings.

Open c:/Users/end0t/MyMlAgents19/Packages/manifest.json in an editor and add "com.unity.nuget.newtonsoft-json": "2.0.0".

Copy the folder C:\Users\end0t\tmp\ml-agents_19\Project\Assets\ML-Agents into Unity's Assets by dragging it there.

Clicking the Play button (▶) at this point lets you confirm it works.

5-2. Running training

The following mlagents-learn command creates the file results\test3DBall\3DBall.onnx.

(ml_agents19) C:\Users\end0t\tmp\ml-agents_19>mlagents-learn .\config\ppo\3DBall.yaml --run-id=test3DBall
C:\Users\end0t\anaconda3\envs\ml_agents19\lib\site-packages\torch\cuda\__init__.py:52: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at  ..\c10\cuda\CUDAFunctions.cpp:115.)
  return torch._C._cuda_getDeviceCount() > 0


 Version information:
  ml-agents: 0.28.0,
  ml-agents-envs: 0.28.0,
  Communicator API: 1.5.0,
  PyTorch: 1.9.1+cu111
C:\Users\end0t\anaconda3\envs\ml_agents19\lib\site-packages\torch\cuda\__init__.py:52: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at  ..\c10\cuda\CUDAFunctions.cpp:115.)
  return torch._C._cuda_getDeviceCount() > 0
[INFO] Listening on port 5004. Start training by pressing the Play button in the Unity Editor.
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected new brain: 3DBall?team=0
[INFO] Hyperparameters for behavior name 3DBall:
  :      :      :      :
[INFO] 3DBall. Step: 12000. Time Elapsed: 51.622 s. Mean Reward: 1.135. Std of Reward: 0.736. Training.
  :      :      :      :
[INFO] 3DBall. Step: 492000. Time Elapsed: 1344.868 s. Mean Reward: 100.000. Std of Reward: 0.000. Training.
[INFO] Exported results\test3DBall\3DBall\3DBall-499181.onnx
[INFO] Exported results\test3DBall\3DBall\3DBall-500181.onnx
[INFO] Copied results\test3DBall\3DBall\3DBall-500181.onnx to results\test3DBall\3DBall.onnx.

(ml_agents19) C:\Users\end0t\tmp\ml-agents_19>

5-3. Checking training results with tensorboard

After running "tensorboard --logdir results", open http://localhost:6006/ in a browser.

(ml_agents19) C:\Users\end0t\tmp\ml-agents_19>tensorboard --logdir results
TensorFlow installation not found - running with reduced feature set.
Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
TensorBoard 2.10.0 at http://localhost:6006/ (Press CTRL+C to quit)

Linear algebra review - commutative, associative, distributive, and exponent laws

https://oguemon.com/study/linear-algebra/matrix-notice/

A transcription of the url above.

Overview

|law          |sum|product|power|
|commutative  | ○ |   △   |  -  |
|associative  | ○ |   ○   |  -  |
|distributive | ○ |   ○   |  -  |
|power        | - |   -   |  ○  |

Commutative law

It holds for sums.

\large{
\textbf{A} + \textbf{B} = \textbf{B} + \textbf{A}
}

For products, however, it does not always hold. An example:

\large{
  \begin{pmatrix}1 & 0 \\ 1 & 1 \end{pmatrix}
  \begin{pmatrix}1 & 1 \\ 0 & 1 \end{pmatrix}
  = \begin{pmatrix}1\cdot 1+0 \cdot 0 & 1 \cdot 1+0 \cdot 1 \\
                   1\cdot 1+1 \cdot 0 & 1 \cdot 1+1 \cdot 1 \end{pmatrix}
  = \begin{pmatrix}1 & 1 \\ 1 & 2 \end{pmatrix}
} \large{
  \begin{pmatrix}1 & 1 \\ 0 & 1 \end{pmatrix}
  \begin{pmatrix}1 & 0 \\ 1 & 1 \end{pmatrix}
  = \begin{pmatrix}1\cdot 1+1 \cdot 1 & 1 \cdot 0+1 \cdot 1 \\
                   0\cdot 1+1 \cdot 1 & 0 \cdot 0+1 \cdot 1 \end{pmatrix}
  = \begin{pmatrix}2 & 1 \\ 1 & 1 \end{pmatrix}
}
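The same counterexample can be spot-checked with numpy (a quick sketch, not part of the original transcription):

```python
import numpy as np

A = np.array([[1, 0],
              [1, 1]])
B = np.array([[1, 1],
              [0, 1]])

AB = A @ B  # [[1, 1], [1, 2]]
BA = B @ A  # [[2, 1], [1, 1]]
# AB != BA, so matrix multiplication is not commutative in general
```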

Associative law

\large{
 (\textbf{A} + \textbf{B}) + \textbf{C} =
  \textbf{A} + (\textbf{B}  + \textbf{C} )
}

\large{
 (\textbf{A} \textbf{B}) \textbf{C} =
  \textbf{A} (\textbf{B} \textbf{C} )
}

Distributive law

\large{
 \textbf{A} (\textbf{B} + \textbf{C} ) =
  \textbf{A} \textbf{B}  + \textbf{A} \textbf{C}
}

\large{
 (\textbf{A} + \textbf{B} ) \textbf{C} =
  \textbf{A} \textbf{C}  + \textbf{B} \textbf{C}
}

Exponent law

\large{
 \textbf{A}^n \textbf{A}^m =  \textbf{A}^{n+m}
}

\large{
 ( \textbf{A}^n )^m =  \textbf{A}^{nm}
}
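These exponent laws can also be spot-checked numerically with numpy's matrix_power (a sketch on one example matrix, not a proof):

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1, 1],
              [0, 1]])

lhs = matrix_power(A, 2) @ matrix_power(A, 3)  # A^2 A^3
rhs = matrix_power(A, 5)                       # A^(2+3)

lhs2 = matrix_power(matrix_power(A, 2), 3)     # (A^2)^3
rhs2 = matrix_power(A, 6)                      # A^(2*3)
```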

Linear algebra review - what is linear algebra?

http://www.math.kanagawa-u.ac.jp/mine/linear_alg/linear_alg_2017_02_28.pdf

A transcription of the pdf above, from p.8 onward.

The three major fields of mathematics

term      description
algebra   the discipline of refining the techniques of the four arithmetic operations
analysis  deals with limits and convergence: differentiation, integration, complex numbers, etc.
geometry  deals with figures in space

Differential and integral calculus

term             description
differentiation  the slope of a curve, i.e. a local amount of change
integration      corresponds to area or volume; an accumulation of what came before

Linear algebra

term            description
linear algebra  the theory for analyzing linear maps (linear functions)

Linearity

\large{
\text{If } f(x) = px \text{, then} \\
f(ax + by) = p(ax + by) = a(px) + b(py) = af(x) + bf(y) \text{ holds,} \\
\text{or equivalently,} \\
f(x + y) = f(x)+f(y) \text{ and } f(ax) = af(x) \text{ hold.}
}
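For f(x) = px this linearity can be checked numerically; a trivial sketch with p = 3 (my own example values):

```python
def f(x, p=3.0):
    # the linear map f(x) = px
    return p * x

a, b, x, y = 2.0, 5.0, 7.0, 11.0

# f(ax + by) == a*f(x) + b*f(y)
lhs = f(a * x + b * y)
rhs = a * f(x) + b * f(y)
```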

Misc. - set symbols for the real numbers, etc.

symbol  meaning                    set
∈ R     Real number                real numbers
∈ N     Natural number             natural numbers
∈ Z     Integer (integral number)  integers
∈ Q     Rational number            rational numbers
∈ C     Complex number             complex numbers

Looking back at linear algebra

I still cannot fluently write python code for deep learning.

In the first place, I seem to have forgotten much of my linear algebra, so I am going to review it.

Various universities publish linear algebra textbooks as pdfs; I somehow settled on the following one from Kanagawa University.

http://www.math.kanagawa-u.ac.jp/mine/linear_alg/linear_alg_2017_02_28.pdf

Machine learning roadmap, SVG edition

The University of Tsukuba's machine learning course is published on youtube; the machine learning roadmap introduced in it was easy to follow, so I transcribed it in SVG (inkscape).

www.youtube.com

(SVG roadmap figure. Supervised learning: features, regression, classification, nonlinear regression, overfitting / model selection, representation learning, L2 regularization (Ridge regression), L1 regularization (Lasso regression), deterministic discriminative models, probabilistic discriminative models, SVM, kernels, logistic regression, softmax regression, convolutional NN, recurrent NN, LSTM. Optimization methods: analytic convex optimization, gradient descent, stochastic optimization, backpropagation. Unsupervised learning: principal component analysis, clustering, k-means, generative adversarial networks.)

Converting mnist data to png with numpy and pillow for python

# coding: utf-8

from PIL import Image
import sys, os
import urllib.request
import gzip
import numpy as np


def main():
    mymnist = MyMnist()
    (x_train, t_train, x_test, t_test) = mymnist.load_mnist()

    i = 0
    while i < 10:
        img = Image.fromarray(np.uint8( x_train[i].reshape(28,28) ) )
        img.save("x_train_%d.png" % (i))
        i += 1

class MyMnist:
    def __init__(self):
        pass

    def load_mnist(self,flatten=True):
        data_files = self.download_mnist()
        # convert to numpy arrays
        dataset = {}
        dataset['train_img']   = self.load_img(  data_files['train_img'] )
        dataset['train_label'] = self.load_label(data_files['train_label'])
        dataset['test_img']    = self.load_img(  data_files['test_img']  )
        dataset['test_label']  = self.load_label(data_files['test_label'])

        # for key in ('train_img', 'test_img'):
        #     dataset[key] = dataset[key].astype(np.float32)
        #     dataset[key] /= 255.0

        for key in ('train_label','test_label'):
            dataset[key]=self.change_one_hot_label( dataset[key] )

        # when not flattening images into 1-d arrays
        if not flatten:
            for key in ('train_img', 'test_img'):
                dataset[key] = dataset[key].reshape(-1, 1, 28, 28)
                
        return (dataset['train_img'],
                dataset['train_label'],
                dataset['test_img'],
                dataset['test_label'] )

    def change_one_hot_label(self,X):
        T = np.zeros((X.size, 10))
        for idx, row in enumerate(T):
            row[X[idx]] = 1
        return T
    
    def download_mnist(self):
        url_base = 'http://yann.lecun.com/exdb/mnist/'
        key_file = {'train_img'  :'train-images-idx3-ubyte.gz',
                    'train_label':'train-labels-idx1-ubyte.gz',
                    'test_img'   :'t10k-images-idx3-ubyte.gz',
                    'test_label' :'t10k-labels-idx1-ubyte.gz' }
        data_files = {}
        dataset_dir = os.path.dirname(os.path.abspath(__file__))
        
        for data_name, file_name in key_file.items():
            req_url   = url_base+file_name
            file_path = dataset_dir + "/" + file_name

            request  = urllib.request.Request( req_url )
            response = urllib.request.urlopen(request).read()
            with open(file_path, mode='wb') as f:
                f.write(response)
                
            data_files[data_name] = file_path
        return data_files

    def load_img( self,file_path):
        img_size    = 784 # = 28*28
        
        with gzip.open(file_path, 'rb') as f:
            data = np.frombuffer(f.read(), np.uint8, offset=16)
        data = data.reshape(-1, img_size)
        return data
    
    def load_label(self,file_path):
        with gzip.open(file_path, 'rb') as f:
            labels = np.frombuffer(f.read(), np.uint8, offset=8)
        return labels

if __name__ == '__main__':
    main()

With the script above, the mnist data is converted as below.
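The change_one_hot_label conversion in the script can also be exercised on its own; a small check:

```python
import numpy as np

def change_one_hot_label(X):
    # one row per label, with a 1 in the column of the label value
    T = np.zeros((X.size, 10))
    for idx, row in enumerate(T):
        row[X[idx]] = 1
    return T

labels  = np.array([3, 0, 7])
one_hot = change_one_hot_label(labels)  # shape (3, 10)
```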

DeepFloorPlan for python3 & tensorflow2

Re: Tried running DeepFloorPlan on Python 3.6.10 - end0tknr's kipple - web写経開発

In the entry above I had almost given up, but

a python3 & tensorflow2 port already exists, and since a Google Colaboratory notebook and the data are also published, I could try it as-is.

https://github.com/zcemycl/TF2DeepFloorplan

https://colab.research.google.com/github/zcemycl/TF2DeepFloorplan/blob/master/deepfloorplan.ipynb

https://drive.google.com/uc?id=1czUSFvk6Z49H-zRikTc67g2HUUz4imON

Reference urls

zlzeng / DeepFloorplan

https://github.com/zlzeng/DeepFloorplan

This is the base of TF2DeepFloorplan.

"R2V" at the url above presumably stands for Raster-to-Vector.

And "R3D" presumably stands for Rent3D.

Rent3D: Floor-Plan Priors for Monocular Layout Estimation

http://www.cs.toronto.edu/~fidler/projects/rent3D.html

It is linked from "DeepFloorplan" above.

art-programmer / FloorplanTransformation

https://github.com/art-programmer/FloorplanTransformation

Here, trained data on google drive (※1) and text files (※2) giving the positions and coordinates of walls and doors in LIFULL floor-plan images are provided.

3dlg-hcvc / plan2scene

https://github.com/3dlg-hcvc/plan2scene

A Google Colab demo is also provided.

https://colab.research.google.com/drive/1lDkbfIV0drR1o9D0WYzoWeRskB91nXHq?usp=sharing

https://www.cs.sfu.ca/~furukawa/

Deep learning with numpy for python

deep-learning-from-scratch/train_deepnet.py at master · oreilly-japan/deep-learning-from-scratch · GitHub

This is a transcription of the Chapter 8 sample code of "ゼロから作るDeep Learning ① (Pythonで学ぶディープラーニングの理論と実装)".

The python script is below, but on cpu alone it seems to take a very long time to finish.

I considered running on gpu by replacing numpy with cupy, but there did not seem to be many cupy examples, so this time I only went as far as writing the source.
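For reference, the numpy→cupy swap mentioned above is usually written as a guarded import so the same script still runs on cpu. A sketch (it assumes that cupy, if present, was installed against a working CUDA runtime):

```python
try:
    import cupy as xp   # GPU arrays
except ImportError:
    import numpy as xp  # CPU fallback; same API subset used here

a = xp.arange(6).reshape(2, 3)
total = int(a.sum())    # int() works for both numpy scalars and cupy 0-d arrays
```

The rest of the code then uses `xp` everywhere instead of `np`, which is the same idea as the commented-out `#import cupy as np` line in the script below.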

# coding: utf-8
import sys, os
import numpy as np
#import cupy as np
import matplotlib.pyplot as plt
import urllib.request
import gzip
import pickle
from collections import OrderedDict

def main():
    # load data (unlike the earlier examples, flatten=False)
    mymnist = MyMnist()
    (x_train, t_train, x_test, t_test) = mymnist.load_mnist(flatten=False)

    network = DeepConvNet()
    
    # training
    trainer = Trainer(network, x_train, t_train, x_test, t_test,
                      epochs=20, mini_batch_size=100,
                      optimizer='Adam', optimizer_param={'lr':0.001},
                      evaluate_sample_num_per_epoch=1000)
    trainer.train()

    # save the parameters
    network.save_params("deep_convnet_params.pkl")
    print("Saved Network Parameters!")
    

# A high-accuracy ConvNet with a recognition rate above 99%
# Its architecture:
#   conv - relu - conv- relu - pool -
#   conv - relu - conv- relu - pool -
#   conv - relu - conv- relu - pool -
#   affine - relu - dropout - affine - dropout - softmax
class DeepConvNet:
    def __init__(
            self,
            input_dim=(1, 28, 28), # channels, height, width
            conv_param_1 = {'filter_num':16,'filter_size':3,'pad':1,'stride':1},
            conv_param_2 = {'filter_num':16,'filter_size':3,'pad':1,'stride':1},
            conv_param_3 = {'filter_num':32,'filter_size':3,'pad':1,'stride':1},
            conv_param_4 = {'filter_num':32,'filter_size':3,'pad':2,'stride':1},
            conv_param_5 = {'filter_num':64,'filter_size':3,'pad':1,'stride':1},
            conv_param_6 = {'filter_num':64,'filter_size':3,'pad':1,'stride':1},
            hidden_size=50, output_size=10):
        
        # Weight initialization.
        # Number of connections each neuron in a layer has
        # to the neurons in the previous layer (TODO: compute this automatically)
        pre_node_nums = np.array([
            1*3*3, 16*3*3, 16*3*3, 32*3*3, 32*3*3, 64*3*3, 64*4*4, hidden_size])
        
        # recommended initial values when using ReLU
        weight_init_scales = np.sqrt(2.0 / pre_node_nums)
        
        self.params = {}
        pre_channel_num = input_dim[0]
        for idx, conv_param in enumerate([conv_param_1,
                                          conv_param_2,
                                          conv_param_3,
                                          conv_param_4,
                                          conv_param_5,
                                          conv_param_6]):
            self.params['W' + str(idx+1)] = \
                weight_init_scales[idx] * np.random.randn(
                    conv_param['filter_num'],
                    pre_channel_num,
                    conv_param['filter_size'],
                    conv_param['filter_size'] )
            self.params['b' + str(idx+1)] = np.zeros(conv_param['filter_num'])
            pre_channel_num = conv_param['filter_num']
            
        self.params['W7'] = weight_init_scales[6] * \
            np.random.randn( 64*4*4, hidden_size )
        self.params['b7'] = np.zeros(hidden_size)
        self.params['W8'] = weight_init_scales[7] * \
            np.random.randn(hidden_size, output_size)
        self.params['b8'] = np.zeros(output_size)

        # build the layers
        self.layers = []
        self.layers.append(Convolution(self.params['W1'], self.params['b1'],
                           conv_param_1['stride'], conv_param_1['pad']))
        self.layers.append(Relu())
        self.layers.append(Convolution(self.params['W2'], self.params['b2'],
                           conv_param_2['stride'], conv_param_2['pad']))
        self.layers.append(Relu())
        self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2))
        self.layers.append(Convolution(self.params['W3'], self.params['b3'], 
                           conv_param_3['stride'], conv_param_3['pad']))
        self.layers.append(Relu())
        self.layers.append(Convolution(self.params['W4'], self.params['b4'],
                           conv_param_4['stride'], conv_param_4['pad']))
        self.layers.append(Relu())
        self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2))
        self.layers.append(Convolution(self.params['W5'], self.params['b5'],
                           conv_param_5['stride'], conv_param_5['pad']))
        self.layers.append(Relu())
        self.layers.append(Convolution(self.params['W6'], self.params['b6'],
                           conv_param_6['stride'], conv_param_6['pad']))
        self.layers.append(Relu())
        self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2))
        self.layers.append(Affine(self.params['W7'], self.params['b7']))
        self.layers.append(Relu())
        self.layers.append(Dropout(0.5))
        self.layers.append(Affine(self.params['W8'], self.params['b8']))
        self.layers.append(Dropout(0.5))
        
        self.last_layer = SoftmaxWithLoss()

    def predict(self, x, train_flg=False):
        for layer in self.layers:
            if isinstance(layer, Dropout):
                x = layer.forward(x, train_flg)
            else:
                x = layer.forward(x)
        return x

    def loss(self, x, t):
        y = self.predict(x, train_flg=True)
        return self.last_layer.forward(y, t)

    def accuracy(self, x, t, batch_size=100):
        if t.ndim != 1 : t = np.argmax(t, axis=1)

        acc = 0.0

        for i in range(int(x.shape[0] / batch_size)):
            tx = x[i*batch_size:(i+1)*batch_size]
            tt = t[i*batch_size:(i+1)*batch_size]
            y = self.predict(tx, train_flg=False)
            y = np.argmax(y, axis=1)
            acc += np.sum(y == tt)

        return acc / x.shape[0]

    def gradient(self, x, t):
        # forward
        self.loss(x, t)

        # backward
        dout = 1
        dout = self.last_layer.backward(dout)

        tmp_layers = self.layers.copy()
        tmp_layers.reverse()
        for layer in tmp_layers:
            dout = layer.backward(dout)

        # collect gradients
        grads = {}
        for i, layer_idx in enumerate((0, 2, 5, 7, 10, 12, 15, 18)):
            grads['W' + str(i+1)] = self.layers[layer_idx].dW
            grads['b' + str(i+1)] = self.layers[layer_idx].db

        return grads

    def save_params(self, file_name="params.pkl"):
        params = {}
        for key, val in self.params.items():
            params[key] = val
        with open(file_name, 'wb') as f:
            pickle.dump(params, f)

    def load_params(self, file_name="params.pkl"):
        with open(file_name, 'rb') as f:
            params = pickle.load(f)
        for key, val in params.items():
            self.params[key] = val

        for i, layer_idx in enumerate((0, 2, 5, 7, 10, 12, 15, 18)):
            self.layers[layer_idx].W = self.params['W' + str(i+1)]
            self.layers[layer_idx].b = self.params['b' + str(i+1)]


class MyMnist:
    def __init__(self):
        pass

    def load_mnist(self,flatten=True):
        data_files = self.download_mnist()
        # convert to numpy arrays
        dataset = {}
        dataset['train_img']   = self.load_img(  data_files['train_img'] )
        dataset['train_label'] = self.load_label(data_files['train_label'])
        dataset['test_img']    = self.load_img(  data_files['test_img']  )
        dataset['test_label']  = self.load_label(data_files['test_label'])

        for key in ('train_img', 'test_img'):
            dataset[key] = dataset[key].astype(np.float32)
            dataset[key] /= 255.0

        for key in ('train_label','test_label'):
            dataset[key]=self.change_one_hot_label( dataset[key] )

        # keep the images as 4D (N, 1, 28, 28) instead of flattening them
        if not flatten:
            for key in ('train_img', 'test_img'):
                dataset[key] = dataset[key].reshape(-1, 1, 28, 28)
                
        return (dataset['train_img'],
                dataset['train_label'],
                dataset['test_img'],
                dataset['test_label'] )

    def change_one_hot_label(self,X):
        T = np.zeros((X.size, 10))
        for idx, row in enumerate(T):
            row[X[idx]] = 1
        return T
    
    def download_mnist(self):
        url_base = 'http://yann.lecun.com/exdb/mnist/'
        key_file = {'train_img'  :'train-images-idx3-ubyte.gz',
                    'train_label':'train-labels-idx1-ubyte.gz',
                    'test_img'   :'t10k-images-idx3-ubyte.gz',
                    'test_label' :'t10k-labels-idx1-ubyte.gz' }
        data_files = {}
        dataset_dir = os.path.dirname(os.path.abspath(__file__))
        
        for data_name, file_name in key_file.items():
            req_url   = url_base+file_name
            file_path = dataset_dir + "/" + file_name

            request  = urllib.request.Request( req_url )
            response = urllib.request.urlopen(request).read()
            with open(file_path, mode='wb') as f:
                f.write(response)
                
            data_files[data_name] = file_path
        return data_files

    def load_img(self, file_path):
        img_size    = 784 # = 28*28
        
        with gzip.open(file_path, 'rb') as f:
            data = np.frombuffer(f.read(), np.uint8, offset=16)
        data = data.reshape(-1, img_size)
        return data
    
    def load_label(self,file_path):
        with gzip.open(file_path, 'rb') as f:
            labels = np.frombuffer(f.read(), np.uint8, offset=8)
        return labels

class Trainer:
    def __init__(self, network, x_train, t_train, x_test, t_test,
                 epochs=20, mini_batch_size=100,
                 optimizer='SGD', optimizer_param={'lr':0.01}, 
                 evaluate_sample_num_per_epoch=None, verbose=True):
        self.network = network
        self.verbose = verbose
        self.x_train = x_train
        self.t_train = t_train
        self.x_test = x_test
        self.t_test = t_test
        self.epochs = epochs
        self.batch_size = mini_batch_size
        self.evaluate_sample_num_per_epoch = evaluate_sample_num_per_epoch

        # optimizer
        optimizer_class_dict = {
            'sgd':SGD,
            'momentum':Momentum,
            'nesterov':Nesterov,
            'adagrad':AdaGrad,
            'rmsprop':RMSprop,
            'adam':Adam}
        self.optimizer = \
            optimizer_class_dict[optimizer.lower()](**optimizer_param)
        
        self.train_size = x_train.shape[0]
        self.iter_per_epoch = max(self.train_size / mini_batch_size, 1)
        self.max_iter = int(epochs * self.iter_per_epoch)
        self.current_iter = 0
        self.current_epoch = 0
        
        self.train_loss_list = []
        self.train_acc_list = []
        self.test_acc_list = []

    def train_step(self):
        batch_mask = np.random.choice(self.train_size, self.batch_size)
        x_batch = self.x_train[batch_mask]
        t_batch = self.t_train[batch_mask]
        
        grads = self.network.gradient(x_batch, t_batch)
        self.optimizer.update(self.network.params, grads)
        
        loss = self.network.loss(x_batch, t_batch)
        self.train_loss_list.append(loss)
        
        # if self.verbose:
        #     print("train loss:", str(loss))
        
        if self.current_iter % self.iter_per_epoch == 0:
            self.current_epoch += 1
            
            x_train_sample, t_train_sample = self.x_train, self.t_train
            x_test_sample, t_test_sample = self.x_test, self.t_test
            if self.evaluate_sample_num_per_epoch is not None:
                t = self.evaluate_sample_num_per_epoch
                x_train_sample = self.x_train[:t]
                t_train_sample = self.t_train[:t]
                x_test_sample  = self.x_test[:t]
                t_test_sample  = self.t_test[:t]
                
            train_acc = self.network.accuracy(x_train_sample, t_train_sample)
            test_acc = self.network.accuracy(x_test_sample, t_test_sample)
            self.train_acc_list.append(train_acc)
            self.test_acc_list.append(test_acc)

            if self.verbose:
                print("epoch:",str(self.current_epoch),
                      "train acc:",str(train_acc),
                      "test acc:", str(test_acc) )
        self.current_iter += 1

    def train(self):
        for i in range(self.max_iter):
            self.train_step()

        test_acc = self.network.accuracy(self.x_test, self.t_test)

        if self.verbose:
            print("=============== Final Test Accuracy")
            print("test acc:" + str(test_acc))
            

# Stochastic Gradient Descent
class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr
        
    def update(self, params, grads):
        for key in params.keys():
            params[key] -= self.lr * grads[key] 
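The update rule above is plain gradient descent. As a minimal standalone sketch (made-up learning rate and starting point), the same rule drives a one-dimensional quadratic to its minimum:

```python
import numpy as np

# minimize f(w) = w**2 with the same update rule as SGD.update above
# (params -= lr * grads)
lr = 0.1
w = np.array([5.0])
for _ in range(100):
    grad = 2 * w          # df/dw
    w -= lr * grad

assert abs(w[0]) < 1e-6   # w has (all but) converged to the minimum at 0
```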

class Momentum:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None
        
    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
                
        for key in params.keys():
            self.v[key] = self.momentum*self.v[key] - self.lr*grads[key] 
            params[key] += self.v[key]

# http://arxiv.org/abs/1212.0901
class Nesterov:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None
        
    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
            
        for key in params.keys():
            params[key] += self.momentum * self.momentum * self.v[key]
            params[key] -= (1 + self.momentum) * self.lr * grads[key]
            self.v[key] *= self.momentum
            self.v[key] -= self.lr * grads[key]

class AdaGrad:
    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None
        
    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
            
        for key in params.keys():
            self.h[key] += grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)

class RMSprop:
    def __init__(self, lr=0.01, decay_rate = 0.99):
        self.lr = lr
        self.decay_rate = decay_rate
        self.h = None
        
    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
            
        for key in params.keys():
            self.h[key] *= self.decay_rate
            self.h[key] += (1 - self.decay_rate) * grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)

# http://arxiv.org/abs/1412.6980v8
class Adam:
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.iter = 0
        self.m = None
        self.v = None
        
    def update(self, params, grads):
        if self.m is None:
            self.m, self.v = {}, {}
            for key, val in params.items():
                self.m[key] = np.zeros_like(val)
                self.v[key] = np.zeros_like(val)
        
        self.iter += 1
        lr_t  = \
            self.lr * np.sqrt(1.0 - self.beta2**self.iter) / \
            (1.0 - self.beta1**self.iter)
        
        for key in params.keys():
            #self.m[key] = self.beta1*self.m[key] + (1-self.beta1)*grads[key]
            #self.v[key] = self.beta2*self.v[key] + (1-self.beta2)*(grads[key]**2)
            self.m[key] += (1 - self.beta1) * (grads[key] - self.m[key])
            self.v[key] += (1 - self.beta2) * (grads[key]**2 - self.v[key])
            
            params[key] -= lr_t * self.m[key] / (np.sqrt(self.v[key]) + 1e-7)

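The in-place updates used above, `m += (1-beta1) * (g - m)`, are algebraically identical to the textbook exponential moving averages shown in the commented-out lines. A quick standalone check with made-up moment and gradient values:

```python
import numpy as np

beta1, beta2 = 0.9, 0.999
g = np.array([0.5, -1.0])          # a made-up gradient
m = np.array([0.2, 0.3])           # current first moment
v = np.array([0.04, 0.09])         # current second moment

# in-place form used in Adam.update above
m_inplace = m + (1 - beta1) * (g - m)
v_inplace = v + (1 - beta2) * (g**2 - v)

# textbook EMA form
m_ema = beta1 * m + (1 - beta1) * g
v_ema = beta2 * v + (1 - beta2) * g**2

assert np.allclose(m_inplace, m_ema)
assert np.allclose(v_inplace, v_ema)
```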


class Relu:
    def __init__(self):
        self.mask = None

    def forward(self, x):
        self.mask = (x <= 0)
        out = x.copy()
        out[self.mask] = 0
        return out

    def backward(self, dout):
        dout[self.mask] = 0
        dx = dout
        return dx

class Sigmoid:
    def __init__(self):
        self.out = None

    def forward(self, x):
        out = sigmoid(x)
        self.out = out
        return out

    def backward(self, dout):
        dx = dout * (1.0 - self.out) * self.out
        return dx

class Affine:
    def __init__(self, W, b):
        self.W =W
        self.b = b
        
        self.x = None
        self.original_x_shape = None
        # gradients of the weight and bias parameters
        self.dW = None
        self.db = None

    def forward(self, x):
        # handle tensor inputs (flatten all but the batch axis)
        self.original_x_shape = x.shape
        x = x.reshape(x.shape[0], -1)
        self.x = x
        out = np.dot(self.x, self.W) + self.b
        return out

    def backward(self, dout):
        dx = np.dot(dout, self.W.T)
        self.dW = np.dot(self.x.T, dout)
        self.db = np.sum(dout, axis=0)

        # restore the input's original shape (tensor support)
        dx = dx.reshape(*self.original_x_shape)  
        return dx

class SoftmaxWithLoss:
    def __init__(self):
        self.loss = None
        self.y = None # softmax output
        self.t = None # teacher labels

    def forward(self, x, t):
        self.t = t
        self.y = self.softmax(x)
        self.loss = self.cross_entropy_error(self.y, self.t)
        return self.loss

    def softmax(self, x):
        x = x - np.max(x, axis=-1, keepdims=True)   # guard against overflow
        return np.exp(x) / np.sum(np.exp(x), axis=-1, keepdims=True)

    def cross_entropy_error(self, y, t):
        if y.ndim == 1:
            t = t.reshape(1, t.size)
            y = y.reshape(1, y.size)

        # if the teacher data is one-hot, convert it to class-index labels
        if t.size == y.size:
            t = t.argmax(axis=1)

        batch_size = y.shape[0]
        return -np.sum(np.log(y[np.arange(batch_size), t] + 1e-7)) / batch_size

    def backward(self, dout=1):
        batch_size = self.t.shape[0]
        if self.t.size == self.y.size: # teacher data is one-hot
            dx = (self.y - self.t) / batch_size
        else:
            dx = self.y.copy()
            dx[np.arange(batch_size), self.t] -= 1
            dx = dx / batch_size
        return dx
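The max-subtraction in the softmax above only shifts the logits, which leaves the output unchanged while preventing `np.exp` from overflowing. A standalone sketch (the function is reproduced here so the check runs on its own):

```python
import numpy as np

# standalone copy of the softmax above, to show why the max is subtracted
def softmax(x):
    x = x - np.max(x, axis=-1, keepdims=True)   # overflow guard
    return np.exp(x) / np.sum(np.exp(x), axis=-1, keepdims=True)

x = np.array([[1000.0, 1001.0, 1002.0]])  # naive np.exp(1000.0) would overflow
y = softmax(x)

assert np.all(np.isfinite(y))              # no inf/nan
assert np.isclose(y.sum(), 1.0)            # still a probability distribution
assert np.allclose(y, softmax(x - 500.0))  # shifting the logits changes nothing
```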

# http://arxiv.org/abs/1207.0580
class Dropout:
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None

    def forward(self, x, train_flg=True):
        if train_flg:
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        else:
            return x * (1.0 - self.dropout_ratio)

    def backward(self, dout):
        return dout * self.mask
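This Dropout scales activations at inference time (`x * (1 - ratio)`) rather than using "inverted" dropout, so the expected activation matches between training and inference. A standalone check with a fixed seed and made-up input:

```python
import numpy as np

np.random.seed(0)
ratio = 0.5
x = np.ones(100000)

mask = np.random.rand(*x.shape) > ratio
train_out = x * mask            # training: random units dropped
test_out  = x * (1.0 - ratio)   # inference: deterministic scaling

# on average, the training output matches the test-time scaling
assert abs(train_out.mean() - test_out.mean()) < 0.01
```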

# http://arxiv.org/abs/1502.03167
class BatchNormalization:
    def __init__(self,
                 gamma,
                 beta,
                 momentum=0.9,
                 running_mean=None,
                 running_var=None):
        self.gamma = gamma
        self.beta = beta
        self.momentum = momentum
        # 4D for conv layers, 2D for fully connected layers
        self.input_shape = None

        # mean and variance used at test time
        self.running_mean = running_mean
        self.running_var = running_var
        
        # intermediate data used in backward
        self.batch_size = None
        self.xc = None
        self.std = None
        self.dgamma = None
        self.dbeta = None

    def forward(self, x, train_flg=True):
        self.input_shape = x.shape
        if x.ndim != 2:
            N, C, H, W = x.shape
            x = x.reshape(N, -1)
        out = self.__forward(x, train_flg)
        return out.reshape(*self.input_shape)
            
    def __forward(self, x, train_flg):
        if self.running_mean is None:
            N, D = x.shape
            self.running_mean = np.zeros(D)
            self.running_var = np.zeros(D)
                        
        if train_flg:
            mu = x.mean(axis=0)
            xc = x - mu
            var = np.mean(xc**2, axis=0)
            std = np.sqrt(var + 10e-7)
            xn = xc / std
            
            self.batch_size = x.shape[0]
            self.xc = xc
            self.xn = xn
            self.std = std
            self.running_mean = \
                self.momentum * self.running_mean + (1-self.momentum) * mu
            self.running_var = \
                self.momentum * self.running_var + (1-self.momentum) * var
        else:
            xc = x - self.running_mean
            xn = xc / ((np.sqrt(self.running_var + 10e-7)))
            
        out = self.gamma * xn + self.beta 
        return out

    def backward(self, dout):
        if dout.ndim != 2:
            N, C, H, W = dout.shape
            dout = dout.reshape(N, -1)

        dx = self.__backward(dout)

        dx = dx.reshape(*self.input_shape)
        return dx

    def __backward(self, dout):
        dbeta = dout.sum(axis=0)
        dgamma = np.sum(self.xn * dout, axis=0)
        dxn = self.gamma * dout
        dxc = dxn / self.std
        dstd = -np.sum((dxn * self.xc) / (self.std * self.std), axis=0)
        dvar = 0.5 * dstd / self.std
        dxc += (2.0 / self.batch_size) * self.xc * dvar
        dmu = np.sum(dxc, axis=0)
        dx = dxc - dmu / self.batch_size
        
        self.dgamma = dgamma
        self.dbeta = dbeta
        return dx
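The core of `__forward` above is per-feature standardization. As a standalone sketch with made-up data, each column of the normalized output has approximately zero mean and unit variance:

```python
import numpy as np

# standalone sketch of the standardization done in __forward
x = np.random.default_rng(1).standard_normal((100, 4)) * 3.0 + 5.0

mu = x.mean(axis=0)
var = np.mean((x - mu)**2, axis=0)
xn = (x - mu) / np.sqrt(var + 10e-7)   # same epsilon as the class above

assert np.allclose(xn.mean(axis=0), 0.0, atol=1e-7)
assert np.allclose(xn.std(axis=0), 1.0, atol=1e-3)
```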

class Convolution:
    def __init__(self, W, b, stride=1, pad=0):
        self.W = W
        self.b = b
        self.stride = stride
        self.pad = pad
        # intermediate data (used in backward)
        self.x = None   
        self.col = None
        self.col_W = None
        # gradients of the weight and bias parameters
        self.dW = None
        self.db = None

    def forward(self, x):
        FN, C, FH, FW = self.W.shape
        N, C, H, W = x.shape
        out_h = 1 + int((H + 2*self.pad - FH) / self.stride)
        out_w = 1 + int((W + 2*self.pad - FW) / self.stride)

        col = im2col(x, FH, FW, self.stride, self.pad)
        col_W = self.W.reshape(FN, -1).T

        out = np.dot(col, col_W) + self.b
        out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2)

        self.x = x
        self.col = col
        self.col_W = col_W
        return out

    def backward(self, dout):
        FN, C, FH, FW = self.W.shape
        dout = dout.transpose(0,2,3,1).reshape(-1, FN)

        self.db = np.sum(dout, axis=0)
        self.dW = np.dot(self.col.T, dout)
        self.dW = self.dW.transpose(1, 0).reshape(FN, C, FH, FW)

        dcol = np.dot(dout, self.col_W.T)
        dx = col2im(dcol, self.x.shape, FH, FW, self.stride, self.pad)
        return dx

class Pooling:
    def __init__(self, pool_h, pool_w, stride=2, pad=0):
        self.pool_h = pool_h
        self.pool_w = pool_w
        self.stride = stride
        self.pad = pad
        
        self.x = None
        self.arg_max = None

    def forward(self, x):
        N, C, H, W = x.shape
        out_h = int(1 + (H - self.pool_h) / self.stride)
        out_w = int(1 + (W - self.pool_w) / self.stride)

        col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad)
        col = col.reshape(-1, self.pool_h*self.pool_w)

        arg_max = np.argmax(col, axis=1)
        out = np.max(col, axis=1)
        out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)

        self.x = x
        self.arg_max = arg_max

        return out

    def backward(self, dout):
        dout = dout.transpose(0, 2, 3, 1)
        
        pool_size = self.pool_h * self.pool_w
        dmax = np.zeros((dout.size, pool_size))
        dmax[np.arange(self.arg_max.size), self.arg_max.flatten()] = \
            dout.flatten()
        dmax = dmax.reshape(dout.shape + (pool_size,)) 
        
        dcol = dmax.reshape(
            dmax.shape[0] * dmax.shape[1] * dmax.shape[2], -1)
        dx = col2im(dcol,
                    self.x.shape,
                    self.pool_h,
                    self.pool_w,
                    self.stride,
                    self.pad)
        return dx
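For intuition about what `Pooling.forward` computes, 2x2 / stride-2 max pooling on a toy 4x4 input can be reproduced with a plain reshape, independently of the class above (a sketch with made-up values):

```python
import numpy as np

# toy check: 2x2 / stride-2 max pooling done with a reshape instead of im2col
x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)

# split H and W into (block, within-block) axes, then take the block max
out = x.reshape(1, 1, 2, 2, 2, 2).max(axis=(3, 5))

assert out.shape == (1, 1, 2, 2)
assert np.array_equal(out[0, 0], np.array([[5., 7.], [13., 15.]]))
```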


# input_data: (batch size, channels, height, width)
def im2col(input_data, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_data.shape
    # // is floor (integer) division
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1

    img = np.pad(input_data,
                 [(0,0), (0,0), (pad, pad), (pad, pad)],
                 'constant')
    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))

    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]

    col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)
    return col
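Each row of the `im2col` output is one receptive field flattened over (C, filter_h, filter_w). This can be cross-checked against NumPy's own patch extractor, `np.lib.stride_tricks.sliding_window_view` (available in NumPy >= 1.20); `im2col` is reproduced here so the check runs standalone:

```python
import numpy as np

# im2col reproduced verbatim so this check is self-contained
def im2col(input_data, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_data.shape
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1
    img = np.pad(input_data, [(0,0), (0,0), (pad,pad), (pad,pad)], 'constant')
    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))
    for y in range(filter_h):
        for x in range(filter_w):
            col[:, :, y, x, :, :] = \
                img[:, :, y:y+stride*out_h:stride, x:x+stride*out_w:stride]
    return col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 8, 8))        # (N, C, H, W)
col = im2col(x, 3, 3)

# one flattened 3x3xC receptive field per row: 6x6 positions x 2 images
assert col.shape == (2*6*6, 3*3*3)

# cross-check against NumPy's built-in sliding windows (NumPy >= 1.20)
win = np.lib.stride_tricks.sliding_window_view(x, (3, 3), axis=(2, 3))
ref = win.transpose(0, 2, 3, 1, 4, 5).reshape(-1, 3*3*3)
assert np.allclose(col, ref)
```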

# input_shape: shape of the input data, e.g. (10, 1, 28, 28)
def col2im(col, input_shape, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_shape
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1
    col = col.reshape(N,out_h,out_w,C,filter_h,filter_w).transpose(0,3,4,5,1,2)

    img = np.zeros((N, C, H + 2*pad + stride - 1, W + 2*pad + stride - 1))
    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            img[:, :, y:y_max:stride, x:x_max:stride] += col[:, :, y, x, :, :]

    return img[:, :, pad:H + pad, pad:W + pad]

if __name__ == '__main__':
    main()

CNN (Convolutional Neural Network) with numpy for python

This is a transcription of pp. 229-233 of 「ゼロから作るDeep Learning ① (Pythonで学ぶディープラーニングの理論と実装)」.

Working through this far has deepened my understanding, but there is so much material that I am sure some parts still elude me.
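Before the full script, the output-size bookkeeping the network relies on can be checked by hand: a 28x28 input convolved with a 5x5 filter (no padding, stride 1) gives a 24x24 feature map, 2x2 pooling halves that to 12x12, and with 30 filters the flattened conv output has 30 * 12 * 12 = 4320 elements. A standalone sketch of that arithmetic:

```python
# output-size arithmetic used by SimpleConvNet below
input_size, filter_size, pad, stride = 28, 5, 0, 1
filter_num = 30

conv_out = (input_size - filter_size + 2*pad) // stride + 1
pool_out = conv_out // 2                      # 2x2 / stride-2 pooling
flat     = filter_num * pool_out * pool_out   # input size of Affine1

assert conv_out == 24
assert pool_out == 12
assert flat == 4320
```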

# coding: utf-8
import sys, os
import numpy as np
import matplotlib.pyplot as plt
import urllib.request
import gzip
import pickle
from collections import OrderedDict

def main():
    # load the data (unlike the earlier examples, flatten=False)
    mymnist = MyMnist()
    (x_train, t_train, x_test, t_test) = mymnist.load_mnist(flatten=False)

    # reduce the data if training takes too long
    x_train, t_train = x_train[:5000],t_train[:5000]
    x_test, t_test   = x_test[:1000], t_test[:1000]

    max_epochs = 20

    # build and train the network
    network = SimpleConvNet(input_dim=(1,28,28), 
                            conv_param = {'filter_num': 30,
                                          'filter_size': 5,
                                          'pad': 0,
                                          'stride': 1},
                            hidden_size=100,
                            output_size=10,
                            weight_init_std=0.01)

    trainer = Trainer(network, x_train, t_train, x_test, t_test,
                      epochs=max_epochs, mini_batch_size=100,
                      optimizer='Adam', optimizer_param={'lr': 0.001},
                      evaluate_sample_num_per_epoch=1000)
    trainer.train()

    # save parameters
    network.save_params("params.pkl")
    print("Saved Network Parameters!")

    # plot accuracy curves
    markers = {'train': 'o', 'test': 's'}
    x = np.arange(max_epochs)
    plt.plot(x, trainer.train_acc_list, marker=markers['train'], label='train', markevery=2)
    plt.plot(x, trainer.test_acc_list, marker=markers['test'], label='test', markevery=2)
    plt.xlabel("epochs")
    plt.ylabel("accuracy")
    plt.ylim(0, 1.0)
    plt.legend(loc='lower right')
    plt.show()

class MyMnist:
    def __init__(self):
        pass

    def load_mnist(self,flatten=True):
        data_files = self.download_mnist()
        # convert to numpy arrays
        dataset = {}
        dataset['train_img']   = self.load_img(  data_files['train_img'] )
        dataset['train_label'] = self.load_label(data_files['train_label'])
        dataset['test_img']    = self.load_img(  data_files['test_img']  )
        dataset['test_label']  = self.load_label(data_files['test_label'])

        for key in ('train_img', 'test_img'):
            dataset[key] = dataset[key].astype(np.float32)
            dataset[key] /= 255.0

        for key in ('train_label','test_label'):
            dataset[key]=self.change_one_hot_label( dataset[key] )

        # keep the images as 4D (N, 1, 28, 28) instead of flattening them
        if not flatten:
            for key in ('train_img', 'test_img'):
                dataset[key] = dataset[key].reshape(-1, 1, 28, 28)
                
        return (dataset['train_img'],
                dataset['train_label'],
                dataset['test_img'],
                dataset['test_label'] )

    def change_one_hot_label(self,X):
        T = np.zeros((X.size, 10))
        for idx, row in enumerate(T):
            row[X[idx]] = 1
        return T
    
    def download_mnist(self):
        url_base = 'http://yann.lecun.com/exdb/mnist/'
        key_file = {'train_img'  :'train-images-idx3-ubyte.gz',
                    'train_label':'train-labels-idx1-ubyte.gz',
                    'test_img'   :'t10k-images-idx3-ubyte.gz',
                    'test_label' :'t10k-labels-idx1-ubyte.gz' }
        data_files = {}
        dataset_dir = os.path.dirname(os.path.abspath(__file__))
        
        for data_name, file_name in key_file.items():
            req_url   = url_base+file_name
            file_path = dataset_dir + "/" + file_name

            request  = urllib.request.Request( req_url )
            response = urllib.request.urlopen(request).read()
            with open(file_path, mode='wb') as f:
                f.write(response)
                
            data_files[data_name] = file_path
        return data_files

    def load_img(self, file_path):
        img_size    = 784 # = 28*28
        
        with gzip.open(file_path, 'rb') as f:
            data = np.frombuffer(f.read(), np.uint8, offset=16)
        data = data.reshape(-1, img_size)
        return data
    
    def load_label(self,file_path):
        with gzip.open(file_path, 'rb') as f:
            labels = np.frombuffer(f.read(), np.uint8, offset=8)
        return labels

    
# conv - relu - pool - affine - relu - affine - softmax
class SimpleConvNet:

    def __init__(self,
                 input_dim=(1, 28, 28), # channels, height, width
                 conv_param={
                     'filter_num':30,
                     'filter_size':5,
                     'pad':0,               # conv padding
                     'stride':1},           # conv stride
                 hidden_size=100,           # number of hidden-layer neurons
                 output_size=10,            # number of output-layer neurons
                 weight_init_std=0.01):     # std. dev. of the initial weights (see note)
        # note: with relu or he, use the "He initial value";
        #       with sigmoid or xavier, the "Xavier initial value"

        filter_num    = conv_param['filter_num']
        filter_size   = conv_param['filter_size']
        filter_pad    = conv_param['pad']
        filter_stride = conv_param['stride']
        input_size = input_dim[1]
        conv_output_size = \
            (input_size - filter_size + 2*filter_pad) / filter_stride + 1
        pool_output_size = \
            int(filter_num * (conv_output_size/2) * (conv_output_size/2))

        # initialize weights
        self.params = {}
        self.params['W1'] = weight_init_std * \
                            np.random.randn(filter_num,
                                            input_dim[0],
                                            filter_size,
                                            filter_size)
        self.params['b1'] = np.zeros(filter_num)
        self.params['W2'] = weight_init_std * \
                            np.random.randn(pool_output_size, hidden_size)
        self.params['b2'] = np.zeros(hidden_size)
        self.params['W3'] = weight_init_std * \
                            np.random.randn(hidden_size, output_size)
        self.params['b3'] = np.zeros(output_size)

        # build the layers
        self.layers = OrderedDict()
        self.layers['Conv1'] = Convolution(
            self.params['W1'],
            self.params['b1'],
            conv_param['stride'],
            conv_param['pad'])

        self.layers['Relu1'] = Relu()
        self.layers['Pool1'] = Pooling(pool_h=2, pool_w=2, stride=2)
        self.layers['Affine1'] = Affine(self.params['W2'], self.params['b2'])
        self.layers['Relu2'] = Relu()
        self.layers['Affine2'] = Affine(self.params['W3'], self.params['b3'])

        self.last_layer = SoftmaxWithLoss()

    def predict(self, x):
        for layer in self.layers.values():
            x = layer.forward(x)

        return x

    def loss(self, x, t):
        """Compute the loss.
        x is the input data, t the teacher labels.
        """
        y = self.predict(x)
        return self.last_layer.forward(y, t)

    def accuracy(self, x, t, batch_size=100):
        if t.ndim != 1 : t = np.argmax(t, axis=1)

        acc = 0.0

        for i in range(int(x.shape[0] / batch_size)):
            tx = x[i*batch_size:(i+1)*batch_size]
            tt = t[i*batch_size:(i+1)*batch_size]
            y = self.predict(tx)
            y = np.argmax(y, axis=1)
            acc += np.sum(y == tt) 

        return acc / x.shape[0]

    # gradient by numerical differentiation. x: input data, t: teacher labels
    def numerical_gradient(self, x, t):
        loss_w = lambda w: self.loss(x, t)

        grads = {}
        for idx in (1, 2, 3):
            grads['W' + str(idx)] = \
                self._numerical_gradient(loss_w, self.params['W' + str(idx)])
            grads['b' + str(idx)] = \
                self._numerical_gradient(loss_w, self.params['b' + str(idx)])

        return grads

    def _numerical_gradient(self, f, x):
        h = 1e-4 # 0.0001
        grad = np.zeros_like(x)

        it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
        while not it.finished:
            idx = it.multi_index
            tmp_val = x[idx]
            x[idx] = tmp_val + h
            fxh1 = f(x) # f(x+h)

            x[idx] = tmp_val - h 
            fxh2 = f(x) # f(x-h)
            grad[idx] = (fxh1 - fxh2) / (2*h)

            x[idx] = tmp_val # restore the original value
            it.iternext()   

        return grad

    # gradient by backpropagation. x: input data, t: teacher labels
    def gradient(self, x, t):
        # forward
        self.loss(x, t)

        # backward
        dout = 1
        dout = self.last_layer.backward(dout)

        layers = list(self.layers.values())
        layers.reverse()
        for layer in layers:
            dout = layer.backward(dout)

        # collect gradients
        grads = {}
        grads['W1'] = self.layers['Conv1'].dW
        grads['b1'] = self.layers['Conv1'].db
        grads['W2'] = self.layers['Affine1'].dW
        grads['b2'] = self.layers['Affine1'].db
        grads['W3'] = self.layers['Affine2'].dW
        grads['b3'] = self.layers['Affine2'].db

        return grads

    def save_params(self, file_name="params.pkl"):
        params = {}
        for key, val in self.params.items():
            params[key] = val
        with open(file_name, 'wb') as f:
            pickle.dump(params, f)

    def load_params(self, file_name="params.pkl"):
        with open(file_name, 'rb') as f:
            params = pickle.load(f)
        for key, val in params.items():
            self.params[key] = val

        for i, key in enumerate(['Conv1', 'Affine1', 'Affine2']):
            self.layers[key].W = self.params['W' + str(i+1)]
            self.layers[key].b = self.params['b' + str(i+1)]

            
class Convolution:
    def __init__(self, W, b, stride=1, pad=0):
        self.W = W
        self.b = b
        self.stride = stride
        self.pad = pad
        
        # intermediate data (used in backward)
        self.x = None   
        self.col = None
        self.col_W = None
        
        # gradients of the weight and bias parameters
        self.dW = None
        self.db = None

    def forward(self, x):
        FN, C, FH, FW = self.W.shape
        N, C, H, W = x.shape
        out_h = 1 + int((H + 2*self.pad - FH) / self.stride)
        out_w = 1 + int((W + 2*self.pad - FW) / self.stride)

        col = im2col(x, FH, FW, self.stride, self.pad)
        col_W = self.W.reshape(FN, -1).T

        out = np.dot(col, col_W) + self.b
        out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2)

        self.x = x
        self.col = col
        self.col_W = col_W

        return out

    def backward(self, dout):
        FN, C, FH, FW = self.W.shape
        dout = dout.transpose(0,2,3,1).reshape(-1, FN)

        self.db = np.sum(dout, axis=0)
        self.dW = np.dot(self.col.T, dout)
        self.dW = self.dW.transpose(1, 0).reshape(FN, C, FH, FW)

        dcol = np.dot(dout, self.col_W.T)
        dx = col2im(dcol, self.x.shape, FH, FW, self.stride, self.pad)

        return dx


class Pooling:
    def __init__(self, pool_h, pool_w, stride=2, pad=0):
        self.pool_h = pool_h
        self.pool_w = pool_w
        self.stride = stride
        self.pad = pad
        
        self.x = None
        self.arg_max = None

    def forward(self, x):
        N, C, H, W = x.shape
        out_h = int(1 + (H - self.pool_h) / self.stride)
        out_w = int(1 + (W - self.pool_w) / self.stride)

        col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad)
        col = col.reshape(-1, self.pool_h*self.pool_w)

        arg_max = np.argmax(col, axis=1)
        out = np.max(col, axis=1)
        out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)

        self.x = x
        self.arg_max = arg_max

        return out

    def backward(self, dout):
        dout = dout.transpose(0, 2, 3, 1)
        
        pool_size = self.pool_h * self.pool_w
        dmax = np.zeros((dout.size, pool_size))
        dmax[np.arange(self.arg_max.size), self.arg_max.flatten()] = dout.flatten()
        dmax = dmax.reshape(dout.shape + (pool_size,)) 
        
        dcol = dmax.reshape(dmax.shape[0] * dmax.shape[1] * dmax.shape[2], -1)
        dx = col2im(dcol,self.x.shape,self.pool_h,self.pool_w,self.stride,self.pad)
        
        return dx

# input_data: (batch size, channels, height, width)
def im2col(input_data, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_data.shape
    # // is floor (integer) division
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1

    img = np.pad(input_data,
                 [(0,0), (0,0), (pad, pad), (pad, pad)],
                 'constant')
    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))

    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]

    col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)
    return col

# input_shape: shape of the input data, e.g. (10, 1, 28, 28)
def col2im(col, input_shape, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_shape
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1
    col = col.reshape(N,out_h,out_w,C,filter_h,filter_w).transpose(0,3,4,5,1,2)

    img = np.zeros((N, C, H + 2*pad + stride - 1, W + 2*pad + stride - 1))
    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            img[:, :, y:y_max:stride, x:x_max:stride] += col[:, :, y, x, :, :]

    return img[:, :, pad:H + pad, pad:W + pad]
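As a standalone sanity check of the two helpers above (the input array and filter sizes here are illustrative), `im2col`/`col2im` are copied so the snippet runs on its own. Each row of the `im2col` output is one flattened receptive-field patch, and when the windows do not overlap (stride equal to the filter size) `col2im` exactly inverts `im2col`, because no input position is summed twice:

```python
import numpy as np

def im2col(input_data, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_data.shape
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1
    img = np.pad(input_data, [(0,0), (0,0), (pad,pad), (pad,pad)], 'constant')
    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))
    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]
    return col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)

def col2im(col, input_shape, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_shape
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1
    col = col.reshape(N, out_h, out_w, C, filter_h, filter_w).transpose(0, 3, 4, 5, 1, 2)
    img = np.zeros((N, C, H + 2*pad + stride - 1, W + 2*pad + stride - 1))
    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            img[:, :, y:y_max:stride, x:x_max:stride] += col[:, :, y, x, :, :]
    return img[:, :, pad:H+pad, pad:W+pad]

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)

# 3x3 filter, stride 1: rows = N*out_h*out_w, columns = C*filter_h*filter_w
col = im2col(x, 3, 3, stride=1, pad=0)
print(col.shape)                                        # (4, 9)
print(np.array_equal(col[0], x[0, 0, :3, :3].ravel()))  # True: first patch

# 2x2 filter, stride 2: non-overlapping windows, so col2im inverts im2col
col2 = im2col(x, 2, 2, stride=2)
print(np.array_equal(col2im(col2, x.shape, 2, 2, stride=2), x))  # True
```

With overlapping windows (e.g. stride 1) `col2im` instead accumulates each position once per window that covers it, which is exactly what the convolution backward pass needs.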


class Trainer:
    def __init__(self, network, x_train, t_train, x_test, t_test,
                 epochs=20, mini_batch_size=100,
                 optimizer='SGD', optimizer_param={'lr':0.01}, 
                 evaluate_sample_num_per_epoch=None, verbose=True):
        self.network = network
        self.verbose = verbose
        self.x_train = x_train
        self.t_train = t_train
        self.x_test = x_test
        self.t_test = t_test
        self.epochs = epochs
        self.batch_size = mini_batch_size
        self.evaluate_sample_num_per_epoch = evaluate_sample_num_per_epoch

        # optimizer
        optimizer_class_dict = {'sgd':SGD,
                                'momentum':Momentum,
                                'nesterov':Nesterov,
                                'adagrad':AdaGrad,
                                'rmsprop':RMSprop,
                                'adam':Adam}
        self.optimizer = optimizer_class_dict[optimizer.lower()](**optimizer_param)
        
        self.train_size = x_train.shape[0]
        self.iter_per_epoch = max(self.train_size / mini_batch_size, 1)
        self.max_iter = int(epochs * self.iter_per_epoch)
        self.current_iter = 0
        self.current_epoch = 0
        
        self.train_loss_list = []
        self.train_acc_list  = []
        self.test_acc_list   = []

    def train_step(self):
        batch_mask = np.random.choice(self.train_size, self.batch_size)
        x_batch = self.x_train[batch_mask]
        t_batch = self.t_train[batch_mask]
        
        grads = self.network.gradient(x_batch, t_batch)
        self.optimizer.update(self.network.params, grads)
        
        loss = self.network.loss(x_batch, t_batch)
        self.train_loss_list.append(loss)
        #if self.verbose: print("train loss:" + str(loss))
        
        if self.current_iter % self.iter_per_epoch == 0:
            self.current_epoch += 1
            
            x_train_sample, t_train_sample = self.x_train, self.t_train
            x_test_sample, t_test_sample = self.x_test, self.t_test
            if self.evaluate_sample_num_per_epoch is not None:
                t = self.evaluate_sample_num_per_epoch
                x_train_sample, t_train_sample = self.x_train[:t], self.t_train[:t]
                x_test_sample, t_test_sample = self.x_test[:t], self.t_test[:t]
                
            train_acc = self.network.accuracy(x_train_sample, t_train_sample)
            test_acc = self.network.accuracy(x_test_sample, t_test_sample)
            self.train_acc_list.append(train_acc)
            self.test_acc_list.append(test_acc)

            if self.verbose:
                print("epoch:",    str(self.current_epoch),
                      "train acc:",str(train_acc),
                      "test acc:", str(test_acc) )
        self.current_iter += 1

    def train(self):
        for i in range(self.max_iter):
            self.train_step()

        test_acc = self.network.accuracy(self.x_test, self.t_test)

        if self.verbose:
            print("=============== Final Test Accuracy")
            print("test acc:" + str(test_acc))

# Stochastic Gradient Descent (SGD)
class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr
        
    def update(self, params, grads):
        for key in params.keys():
            params[key] -= self.lr * grads[key] 
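A minimal standalone check of the update rule above (the quadratic objective and learning rate are illustrative, not from the original text): repeatedly applying `params[key] -= lr * grads[key]` to f(w) = w², whose gradient is 2w, drives w toward the minimum at 0.

```python
import numpy as np

class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr

    def update(self, params, grads):
        for key in params.keys():
            params[key] -= self.lr * grads[key]

# minimize f(w) = w^2; each step multiplies w by (1 - 2*lr)
params = {'w': np.array([1.0])}
opt = SGD(lr=0.1)
for _ in range(100):
    grads = {'w': 2 * params['w']}
    opt.update(params, grads)

print(params['w'])  # very close to 0 (about 0.8**100 ~ 2e-10)
```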

class Momentum:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None
        
    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
                
        for key in params.keys():
            self.v[key] = self.momentum*self.v[key] - self.lr*grads[key] 
            params[key] += self.v[key]

# http://arxiv.org/abs/1212.0901
class Nesterov:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None
        
    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
            
        for key in params.keys():
            params[key] += self.momentum * self.momentum * self.v[key]
            params[key] -= (1 + self.momentum) * self.lr * grads[key]
            self.v[key] *= self.momentum
            self.v[key] -= self.lr * grads[key]

class AdaGrad:
    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None
        
    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
            
        for key in params.keys():
            self.h[key] += grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)

class RMSprop:
    def __init__(self, lr=0.01, decay_rate = 0.99):
        self.lr = lr
        self.decay_rate = decay_rate
        self.h = None
        
    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
            
        for key in params.keys():
            self.h[key] *= self.decay_rate
            self.h[key] += (1 - self.decay_rate) * grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)

# http://arxiv.org/abs/1412.6980v8
class Adam:
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.iter = 0
        self.m = None
        self.v = None
        
    def update(self, params, grads):
        if self.m is None:
            self.m, self.v = {}, {}
            for key, val in params.items():
                self.m[key] = np.zeros_like(val)
                self.v[key] = np.zeros_like(val)
        
        self.iter += 1
        lr_t  = self.lr * np.sqrt(1.0 - self.beta2**self.iter) / \
            (1.0 - self.beta1**self.iter)
        
        for key in params.keys():
            #self.m[key] = self.beta1*self.m[key] + (1-self.beta1)*grads[key]
            #self.v[key] = self.beta2*self.v[key] + (1-self.beta2)*(grads[key]**2)
            self.m[key] += (1 - self.beta1) * (grads[key] - self.m[key])
            self.v[key] += (1 - self.beta2) * (grads[key]**2 - self.v[key])
            
            params[key] -= lr_t * self.m[key] / (np.sqrt(self.v[key]) + 1e-7)
            
            # explicit bias correction, kept for reference:
            #unbias_m += (1-self.beta1) * (grads[key] - self.m[key])
            #unbias_b += (1-self.beta2) * (grads[key]*grads[key] - self.v[key])
            #params[key] += self.lr * unbias_m / (np.sqrt(unbias_b) + 1e-7)
            
            

class Relu:
    def __init__(self):
        self.mask = None

    def forward(self, x):
        self.mask = (x <= 0)
        out = x.copy()
        out[self.mask] = 0
        return out

    def backward(self, dout):
        dout[self.mask] = 0
        dx = dout
        return dx

class Sigmoid:
    def __init__(self):
        self.out = None

    def forward(self, x):
        out = sigmoid(x)
        self.out = out
        return out

    def backward(self, dout):
        dx = dout * (1.0 - self.out) * self.out
        return dx

class Affine:
    def __init__(self, W, b):
        self.W =W
        self.b = b
        
        self.x = None
        self.original_x_shape = None
        # gradients of the weight and bias parameters
        self.dW = None
        self.db = None

    def forward(self, x):
        # handle tensor input: flatten all but the batch dimension
        self.original_x_shape = x.shape
        x = x.reshape(x.shape[0], -1)
        self.x = x

        out = np.dot(self.x, self.W) + self.b
        return out

    def backward(self, dout):
        dx = np.dot(dout, self.W.T)
        self.dW = np.dot(self.x.T, dout)
        self.db = np.sum(dout, axis=0)
        
        dx = dx.reshape(*self.original_x_shape)
        return dx

class SoftmaxWithLoss:
    def __init__(self):
        self.loss = None
        self.y = None # softmax output
        self.t = None # teacher labels

    def forward(self, x, t):
        self.t = t
        self.y = self.softmax(x)
        self.loss = self.cross_entropy_error(self.y, self.t)
        return self.loss

    def softmax(self, x):
        x = x - np.max(x, axis=-1, keepdims=True)   # guard against overflow
        return np.exp(x) / np.sum(np.exp(x), axis=-1, keepdims=True)

    def cross_entropy_error(self, y, t):
        if y.ndim == 1:
            t = t.reshape(1, t.size)
            y = y.reshape(1, y.size)

        # if the teacher data is one-hot, convert it to correct-label indices
        if t.size == y.size:
            t = t.argmax(axis=1)

        batch_size = y.shape[0]
        return -np.sum(np.log(y[np.arange(batch_size), t] + 1e-7)) / batch_size

    def backward(self, dout=1):
        batch_size = self.t.shape[0]
        if self.t.size == self.y.size: # teacher data is one-hot
            dx = (self.y - self.t) / batch_size
        else:
            dx = self.y.copy()
            dx[np.arange(batch_size), self.t] -= 1
            dx = dx / batch_size
        return dx
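To confirm the analytic backward above, i.e. (y - t) / batch_size, here is a standalone numerical gradient check (the toy logits and labels are illustrative) comparing it against central finite differences of the forward loss:

```python
import numpy as np

def softmax(x):
    x = x - np.max(x, axis=-1, keepdims=True)
    return np.exp(x) / np.sum(np.exp(x), axis=-1, keepdims=True)

def loss(x, t):
    # cross-entropy with integer labels t
    y = softmax(x)
    batch_size = x.shape[0]
    return -np.sum(np.log(y[np.arange(batch_size), t] + 1e-7)) / batch_size

x = np.array([[0.3, 2.9, 4.0],
              [0.1, 0.2, 0.7]])
t = np.array([2, 1])

# analytic gradient: (y - one_hot(t)) / batch_size
y = softmax(x)
dx = y.copy()
dx[np.arange(x.shape[0]), t] -= 1
dx /= x.shape[0]

# numerical gradient via central differences
eps = 1e-5
num = np.zeros_like(x)
for i in np.ndindex(x.shape):
    xp, xm = x.copy(), x.copy()
    xp[i] += eps
    xm[i] -= eps
    num[i] = (loss(xp, t) - loss(xm, t)) / (2 * eps)

print(np.max(np.abs(dx - num)))  # tiny (well below 1e-6)
```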

# http://arxiv.org/abs/1207.0580
class Dropout:
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None

    def forward(self, x, train_flg=True):
        if train_flg:
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        else:
            return x * (1.0 - self.dropout_ratio)

    def backward(self, dout):
        return dout * self.mask

# http://arxiv.org/abs/1502.03167
class BatchNormalization:
    def __init__(self,gamma,beta,momentum=0.9,running_mean=None,running_var=None):
        self.gamma = gamma
        self.beta = beta
        self.momentum = momentum
        self.input_shape = None # 4D for conv layers, 2D for fully connected layers

        # mean and variance used at test time
        self.running_mean = running_mean
        self.running_var = running_var  
        
        # intermediate data used in backward
        self.batch_size = None
        self.xc = None
        self.xn = None
        self.std = None
        self.dgamma = None
        self.dbeta = None

    def forward(self, x, train_flg=True):
        self.input_shape = x.shape
        if x.ndim != 2:
            N, C, H, W = x.shape
            x = x.reshape(N, -1)

        out = self.__forward(x, train_flg)
        
        return out.reshape(*self.input_shape)
            
    def __forward(self, x, train_flg):
        if self.running_mean is None:
            N, D = x.shape
            self.running_mean = np.zeros(D)
            self.running_var = np.zeros(D)
                        
        if train_flg:
            mu = x.mean(axis=0)
            xc = x - mu
            var = np.mean(xc**2, axis=0)
            std = np.sqrt(var + 10e-7)
            xn = xc / std
            
            self.batch_size = x.shape[0]
            self.xc = xc
            self.xn = xn
            self.std = std
            self.running_mean = \
                self.momentum * self.running_mean + (1-self.momentum) * mu
            self.running_var = \
                self.momentum * self.running_var + (1-self.momentum) * var
        else:
            xc = x - self.running_mean
            xn = xc / ((np.sqrt(self.running_var + 10e-7)))
            
        out = self.gamma * xn + self.beta 
        return out

    def backward(self, dout):
        if dout.ndim != 2:
            N, C, H, W = dout.shape
            dout = dout.reshape(N, -1)

        dx = self.__backward(dout)
        dx = dx.reshape(*self.input_shape)
        return dx

    def __backward(self, dout):
        dbeta = dout.sum(axis=0)
        dgamma = np.sum(self.xn * dout, axis=0)
        dxn = self.gamma * dout
        dxc = dxn / self.std
        dstd = -np.sum((dxn * self.xc) / (self.std * self.std), axis=0)
        dvar = 0.5 * dstd / self.std
        dxc += (2.0 / self.batch_size) * self.xc * dvar
        dmu = np.sum(dxc, axis=0)
        dx = dxc - dmu / self.batch_size
        
        self.dgamma = dgamma
        self.dbeta = dbeta
        return dx
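The core of the training-time forward pass above is just per-feature standardization followed by the learned affine transform. A standalone sketch (the sample data, `gamma`, and `beta` are illustrative) showing that the normalized activations come out with per-feature mean ≈ 0 and variance ≈ 1:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 5))  # batch of 64, 5 features

gamma, beta = np.ones(5), np.zeros(5)

# same steps as the train_flg branch of __forward
mu = x.mean(axis=0)
xc = x - mu
var = np.mean(xc**2, axis=0)
xn = xc / np.sqrt(var + 10e-7)
out = gamma * xn + beta

print(np.allclose(out.mean(axis=0), 0, atol=1e-7))  # True
print(np.allclose(out.var(axis=0), 1, atol=1e-3))   # True
```

With `gamma = 1` and `beta = 0` this is pure standardization; during training the layer learns `gamma`/`beta` to restore whatever scale and shift suit the following layer.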

if __name__ == '__main__':
    main()

Running the code above produces the following output:

(dl_scratch) C:\Users\end0t\tmp\deep-learning-from-scratch\ch07>python foo.py
epoch: 1 train acc: 0.395 test acc: 0.38
epoch: 2 train acc: 0.802 test acc: 0.801
epoch: 3 train acc: 0.876 test acc: 0.874
epoch: 4 train acc: 0.893 test acc: 0.885
epoch: 5 train acc: 0.924 test acc: 0.898
epoch: 6 train acc: 0.92 test acc: 0.913
epoch: 7 train acc: 0.932 test acc: 0.923
epoch: 8 train acc: 0.951 test acc: 0.932
epoch: 9 train acc: 0.948 test acc: 0.934
epoch: 10 train acc: 0.954 test acc: 0.935
epoch: 11 train acc: 0.962 test acc: 0.938
epoch: 12 train acc: 0.972 test acc: 0.947
epoch: 13 train acc: 0.973 test acc: 0.947
epoch: 14 train acc: 0.983 test acc: 0.953
epoch: 15 train acc: 0.977 test acc: 0.956
epoch: 16 train acc: 0.982 test acc: 0.96
epoch: 17 train acc: 0.985 test acc: 0.959
epoch: 18 train acc: 0.988 test acc: 0.958
epoch: 19 train acc: 0.991 test acc: 0.958
epoch: 20 train acc: 0.987 test acc: 0.956
=============== Final Test Accuracy
test acc:0.956
Saved Network Parameters!