
Sarsa(λ) and Q(λ) in Tabular Case


Thanks to R. S. Sutton and A. G. Barto for their great work in Reinforcement Learning: An Introduction.


Eligibility Traces in Prediction Problems

In the backward view of TD(λ), there is a memory variable associated with each state: its eligibility trace. The eligibility trace for state $s$ at time $t$ is a random variable denoted $Z_t(s) \in \mathbb{R}^+$. On each step, the eligibility traces for all states decay by $\gamma\lambda$, and the eligibility trace for the one state visited on the step is incremented by 1:

$$Z_t(s) = \begin{cases} \gamma\lambda Z_{t-1}(s) & s \neq S_t \\ \gamma\lambda Z_{t-1}(s) + 1 & s = S_t \end{cases}$$

for all nonterminal states $s$, where $\gamma$ is the discount rate and $\lambda$ is the $\lambda$-return[1] parameter, or trace-decay parameter. This kind of eligibility trace is called an accumulating trace. The global TD error signal triggers proportional updates to all recently visited states, as signaled by their nonzero traces:

$$\Delta V_t(s) = \alpha \delta_t Z_t(s), \quad \forall s \in \mathcal{S}$$

where

$$\delta_t = R_{t+1} + \gamma V_t(S_{t+1}) - V_t(S_t)$$



A complete prediction algorithm for on-line TD(λ) is given as follows:


    Initialize V(s) arbitrarily.
    Repeat (for each episode):
      Initialize Z(s) = 0, for all s ∈ S.
      Initialize S.
      Repeat (for each step of episode):
        A ← action given by π for S.
        Take action A; observe reward R and the next state S′.
        δ ← R + γV(S′) − V(S)
        Z(S) ← Z(S) + 1
        For all s ∈ S:
          V(s) ← V(s) + αδZ(s)
          Z(s) ← γλZ(s)
        S ← S′
      Until S is terminal.
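
As a concrete illustration, the following is a minimal Python sketch of the on-line TD(λ) algorithm above with accumulating traces. The `env.reset()`/`env.step()` interface and the `policy` callable are assumptions made for this sketch, not part of the original text.

```python
import numpy as np

def td_lambda(env, policy, n_states, alpha=0.1, gamma=0.99, lam=0.9,
              n_episodes=500):
    """Tabular on-line TD(lambda) with accumulating traces.

    Assumes a hypothetical interface: env.reset() -> state,
    env.step(action) -> (next_state, reward, done), and policy(state)
    returning the action prescribed by pi.
    """
    V = np.zeros(n_states)
    for _ in range(n_episodes):
        Z = np.zeros(n_states)      # eligibility traces, reset per episode
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            # TD error; the value of a terminal state is taken as 0
            delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
            Z[s] += 1.0             # accumulating trace for the visited state
            V += alpha * delta * Z  # update all states in proportion to traces
            Z *= gamma * lam        # decay all traces
            s = s_next
    return V
```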





Sarsa(λ)

The main idea of control is simply to learn action values rather than state values. The idea in Sarsa(λ) is to apply the TD(λ) prediction method to state-action pairs. Let $Z_t(s,a)$ denote the trace for state-action pair $(s,a)$. Substitute state-action variables for state variables, i.e.,

$$Q_{t+1}(s,a) = Q_t(s,a) + \alpha \delta_t Z_t(s,a), \quad \forall s, a$$

where

$$\delta_t = R_{t+1} + \gamma Q_t(S_{t+1}, A_{t+1}) - Q_t(S_t, A_t)$$

and

$$Z_t(s,a) = \begin{cases} \gamma\lambda Z_{t-1}(s,a) + 1 & s = S_t, a = A_t \\ \gamma\lambda Z_{t-1}(s,a) & \text{otherwise} \end{cases} \quad \text{for all } s, a$$

The complete Sarsa(λ) algorithm is given as follows:


    Initialize Q(s,a) arbitrarily, for all s ∈ S, a ∈ A(s).
    Repeat (for each episode):
      Initialize Z(s,a) = 0, for all s ∈ S, a ∈ A(s).
      Initialize S, A.
      Repeat (for each step of episode):
        Take action A; observe R, S′.
        Choose A′ from S′ using policy derived from Q (e.g., ε-greedy).
        δ ← R + γQ(S′,A′) − Q(S,A)
        Z(S,A) ← Z(S,A) + 1
        For all s ∈ S, a ∈ A(s):
          Q(s,a) ← Q(s,a) + αδZ(s,a)
          Z(s,a) ← γλZ(s,a)
        S ← S′; A ← A′
      Until S is terminal.
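
Here is a minimal Python sketch of tabular Sarsa(λ), assuming the same hypothetical `env` interface as in the TD(λ) sketch and an ε-greedy behavior policy:

```python
import numpy as np

def sarsa_lambda(env, n_states, n_actions, alpha=0.1, gamma=0.99,
                 lam=0.9, epsilon=0.1, n_episodes=500):
    """Tabular Sarsa(lambda) with accumulating traces and an
    epsilon-greedy policy (assumed env interface as before)."""
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)   # exploratory action
        return int(np.argmax(Q[s]))               # greedy action

    for _ in range(n_episodes):
        Z = np.zeros_like(Q)            # one trace per state-action pair
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)
            delta = r + (0.0 if done else gamma * Q[s_next, a_next]) - Q[s, a]
            Z[s, a] += 1.0              # accumulating trace
            Q += alpha * delta * Z      # update all pairs via their traces
            Z *= gamma * lam            # decay all traces
            s, a = s_next, a_next
    return Q
```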



Q(λ)

Two different methods have been proposed that combine eligibility traces and Q-learning: Watkins's Q(λ) and Peng's Q(λ). Here we focus on Watkins's Q(λ) and give only a brief description of Peng's Q(λ).



Unlike TD(λ) and Sarsa(λ), Watkins's Q(λ) does not look ahead all the way to the end of the episode in its backup. It only looks ahead as far as the next exploratory action. For example, suppose the first action, $A_{t+1}$, is exploratory. Watkins's Q(λ) would still do the one-step update of $Q_t(S_t, A_t)$ toward $R_{t+1} + \gamma \max_a Q_t(S_{t+1}, a)$. In general, if $A_{t+n}$ is the first exploratory action, then the longest backup is toward

$$R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{n-1} R_{t+n} + \gamma^n \max_a Q(S_{t+n}, a)$$

where we assume off-line updating.

The mechanistic or backward view of Watkins's Q(λ) is also very simple. Eligibility traces are used just as in Sarsa(λ), except that they are set to zero whenever an exploratory action is taken. That is: first, the traces for all state-action pairs are either decayed by $\gamma\lambda$ or, if an exploratory action was taken, set to 0; second, the trace corresponding to the current state and action is incremented by 1, i.e.,

$$Z_t(s,a) = \mathbb{I}_{sS_t} \cdot \mathbb{I}_{aA_t} + \begin{cases} \gamma\lambda Z_{t-1}(s,a) & \text{if } Q_{t-1}(S_t, A_t) = \max_a Q_{t-1}(S_t, a) \\ 0 & \text{otherwise} \end{cases}$$

where $\mathbb{I}_{xy}$ is an identity indicator function, equal to 1 if $x = y$ and 0 otherwise. The rest of the algorithm is defined by

$$Q_{t+1}(s,a) = Q_t(s,a) + \alpha \delta_t Z_t(s,a), \quad \forall s \in \mathcal{S}, a \in \mathcal{A}(s)$$

where

$$\delta_t = R_{t+1} + \gamma \max_{a'} Q_t(S_{t+1}, a') - Q_t(S_t, A_t)$$


The complete algorithm is given below:


    Initialize Q(s,a) arbitrarily, for all s ∈ S, a ∈ A(s).
    Repeat (for each episode):
      Initialize Z(s,a) = 0, for all s ∈ S, a ∈ A(s).
      Initialize S, A.
      Repeat (for each step of episode):
        Take action A; observe R, S′.
        Choose A′ from S′ using policy derived from Q (e.g., ε-greedy).
        A* ← argmax_a Q(S′,a) (if A′ ties for the max, then A* ← A′).
        δ ← R + γQ(S′,A*) − Q(S,A)
        Z(S,A) ← Z(S,A) + 1
        For all s ∈ S, a ∈ A(s):
          Q(s,a) ← Q(s,a) + αδZ(s,a)
          If A′ = A*, then Z(s,a) ← γλZ(s,a); else Z(s,a) ← 0.
        S ← S′; A ← A′
      Until S is terminal.
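
A minimal Python sketch of Watkins's Q(λ) under the same assumed `env` interface follows. The only differences from the Sarsa(λ) sketch are the TD target, which maximizes over actions, and the trace update, which cuts all traces to zero after an exploratory action:

```python
import numpy as np

def watkins_q_lambda(env, n_states, n_actions, alpha=0.1, gamma=0.99,
                     lam=0.9, epsilon=0.1, n_episodes=500):
    """Tabular Watkins's Q(lambda) with an epsilon-greedy behavior
    policy (assumed env interface as before)."""
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(n_episodes):
        Z = np.zeros_like(Q)
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next)
            a_star = int(np.argmax(Q[s_next]))
            if Q[s_next, a_next] == Q[s_next, a_star]:
                a_star = a_next         # if A' ties for the max, A* <- A'
            delta = r + (0.0 if done else gamma * Q[s_next, a_star]) - Q[s, a]
            Z[s, a] += 1.0
            Q += alpha * delta * Z
            if a_next == a_star:
                Z *= gamma * lam        # greedy action: decay traces as usual
            else:
                Z[:] = 0.0              # exploratory action: cut all traces
            s, a = s_next, a_next
    return Q
```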



Unfortunately, cutting off traces every time an exploratory action is taken loses much of the advantage of using eligibility traces. Peng's Q(λ) is an alternate version of Q(λ) meant to remedy this, which can be thought of as a hybrid of Sarsa(λ) and Watkins's Q(λ). In Peng's Q(λ), there is no distinction between exploratory and greedy actions. Each component backup is over many steps of actual experience, and all but the last are capped by a final maximization over actions. For a fixed nongreedy policy, $Q_t$ converges to neither $q_\pi$ nor $q_*$, but to some hybrid of the two. However, if the policy is gradually made more greedy, then the method may still converge to $q_*$.

Replacing Traces

In some cases significantly better performance can be obtained by using a slightly modified kind of trace known as the replacing trace:

$$Z_t(s) = \begin{cases} \gamma\lambda Z_{t-1}(s) & s \neq S_t \\ 1 & s = S_t \end{cases}$$

There are several possible ways to generalize replacing eligibility traces for use in control methods. In some cases, the state-action traces are updated by the following:

$$Z_t(s,a) = \begin{cases} 1 & s = S_t, a = A_t \\ 0 & s = S_t, a \neq A_t \\ \gamma\lambda Z_{t-1}(s,a) & s \neq S_t \end{cases}$$
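
As a sketch of how this changes the implementation, the per-step trace update below could stand in for the two accumulating-trace lines (`Z[s, a] += 1.0` and the later decay) in the Sarsa(λ) sketch above; the function name and the placement before the Q update are assumptions of this sketch:

```python
import numpy as np

def update_replacing_traces(Z, s, a, gamma, lam):
    """Compute Z_t from Z_{t-1} for state-action replacing traces.

    Call once per step, before the Q update: decay every trace,
    then overwrite the traces of the visited state.
    """
    Z *= gamma * lam    # s != S_t: decay as usual
    Z[s, :] = 0.0       # s == S_t, a != A_t: zero the other actions
    Z[s, a] = 1.0       # s == S_t, a == A_t: replace with 1
    return Z
```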






  1. λ-return: $G_t^\lambda = (1-\lambda) \sum_{n=1}^{\infty} \lambda^{n-1} G_t^{(n)}$


