Neural Network – HMA & DSMA

Neural Network ─ Hull Moving Average (HMA) & Deviation-Scaled Moving Average (DSMA)

☛ Uses the HMA algorithm, but this one is a variation from low-lag to zero-lag,
which is then... fused with the following (see the baseline HMA sketch after the note below):

☛ Jurik filters/smoothing and custom MA types
☛ Combined with the Deviation-Scaled Moving Average algorithm
☛ Better & Best formula (Better & APB calculation)

Please note: best used together with the Volumes on Main Chart indicator, which is recommended for the indicator's trigger/update action.
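The post does not include the indicator's zero-lag HMA/Jurik source, so as background only, here is a minimal Python sketch of the classic Hull Moving Average that the zero-lag variation starts from. The `wma`/`hma` helpers and the default period of 21 are illustrative assumptions, not the attached indicator's code.

```python
def wma(values, period):
    """Linearly weighted moving average; newest value gets the largest weight."""
    out = []
    for i in range(len(values)):
        n = min(i + 1, period)                      # shorter window during warm-up
        window = values[i - n + 1: i + 1]
        weights = range(1, n + 1)                   # oldest -> newest
        out.append(sum(w * v for w, v in zip(weights, window)) / sum(weights))
    return out

def hma(prices, period=21):
    """Classic Hull Moving Average: WMA(2*WMA(n/2) - WMA(n), sqrt(n))."""
    half = wma(prices, max(1, period // 2))
    full = wma(prices, period)
    diff = [2 * h - f for h, f in zip(half, full)]  # lag-compensated series
    return wma(diff, max(1, round(period ** 0.5)))
```

The `2*WMA(n/2) - WMA(n)` step over-weights recent price to cancel most of the WMA's lag; a zero-lag variation pushes this compensation further, which is what the post's description refers to.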

Brief theory of Neural Networks:

A neural network is an adjustable model of outputs as functions of inputs. It consists of several layers:

  1. an input layer, which consists of the input data
  2. a hidden layer, which consists of processing nodes called neurons
  3. an output layer, which consists of one or several neurons, whose outputs are the network outputs.

All nodes of adjacent layers are interconnected. These connections are called synapses. Every synapse has an assigned scaling coefficient, by which the data propagated through the synapse is multiplied. These scaling coefficients are called weights (w[i][j][k]). In a Feed-Forward Neural Network (FFNN) the data is propagated from the inputs to the outputs. Here is an example of an FFNN with one input layer, one output layer, and two hidden layers:

Attached Image: Neural Network - HMA & DSMA 1

The topology of an FFNN is often abbreviated as follows: <# of inputs> - <# of neurons in the first hidden layer> - <# of neurons in the second hidden layer> - ... - <# of outputs>. The above network can be referred to as a 4-3-3-1 network.
The data is processed by the neurons in two steps, correspondingly shown within the circle by a summation sign and a step sign:

  1. All inputs are multiplied by the associated weights and summed.
  2. The resulting sums are processed by the neuron's activation function, whose output is the neuron's output.

It is the neuron's activation function that gives non-linearity to the neural network model. Without it, there is no reason to have hidden layers, and the neural network becomes a linear auto-regressive (AR) model.
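To make the two processing steps concrete, here is a minimal Python sketch of one forward pass through a fully connected FFNN shaped like the 4-3-3-1 example above. The `tanh` activation and the random weights are illustrative assumptions; the attached indicator's actual network and weights are not published in this post.

```python
import math
import random

def feed_forward(inputs, layers):
    """One forward pass through a fully connected FFNN.

    `layers` is a list of weight matrices; layers[k][j] holds the weights
    connecting every input of layer k to its neuron j. Each neuron sums its
    weighted inputs (step 1) and passes the sum through an activation
    function (step 2) -- tanh here, assumed for illustration.
    """
    activations = inputs
    for weights in layers:
        activations = [
            math.tanh(sum(w * x for w, x in zip(neuron_w, activations)))
            for neuron_w in weights
        ]
    return activations

# A 4-3-3-1 network like the one described above, with random weights.
random.seed(0)
shape = [4, 3, 3, 1]
layers = [
    [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(shape, shape[1:])
]
print(feed_forward([0.5, -0.2, 0.1, 0.8], layers))  # single network output
```

Replacing `math.tanh` with the identity function collapses every layer into one linear map, which is exactly why the activation function is what separates the network from a linear AR model.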

Deviation-Scaled Moving Average (DSMA)

A brand-new DSMA created by John Ehlers and featured in the July 2018 issue of TASC magazine.

The DSMA is a data smoothing technique that acts as an exponential moving average with a dynamic smoothing coefficient. The smoothing coefficient is automatically updated based on the magnitude of price changes. In the Deviation-Scaled Moving Average, the standard deviation from the mean is chosen as the measure of this magnitude. The resulting indicator provides substantial smoothing of the data even when price changes are small, while quickly adapting to these changes.
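As a rough sketch of that idea (not the attached indicator's code), the following Python follows the commonly cited TASC formulation of Ehlers' DSMA: a zero-mean price change is pre-filtered, rescaled to standard-deviation (RMS) units, and its magnitude drives the smoothing factor of an EMA. The super-smoother pre-filter, the scale factor of 5, and the default period of 40 should all be treated as assumptions here.

```python
import math

def dsma(prices, period=40):
    """Deviation-Scaled Moving Average: a sketch of Ehlers' outline.

    An EMA whose smoothing factor grows with the deviation of a filtered
    price change from zero, measured in RMS (standard-deviation) units, so
    it adapts quickly to large moves while staying smooth in quiet markets.
    """
    # Two-pole "super smoother" coefficients over half the period
    # (assumed pre-filter, from the commonly cited TASC formulation).
    a1 = math.exp(-1.414 * math.pi / (0.5 * period))
    b1 = 2.0 * a1 * math.cos(1.414 * math.pi / (0.5 * period))
    c2, c3 = b1, -a1 * a1
    c1 = 1.0 - c2 - c3

    zeros = [0.0] * len(prices)  # zero-mean 2-bar price change
    filt = [0.0] * len(prices)   # smoothed price change
    out = [0.0] * len(prices)

    for i, price in enumerate(prices):
        if i >= 2:
            zeros[i] = price - prices[i - 2]
            filt[i] = (c1 * (zeros[i] + zeros[i - 1]) / 2.0
                       + c2 * filt[i - 1] + c3 * filt[i - 2])
        # RMS of the filtered change over the lookback ~ standard deviation
        window = filt[max(0, i - period + 1): i + 1]
        rms = math.sqrt(sum(x * x for x in window) / len(window))
        scaled = filt[i] / rms if rms > 0.0 else 0.0
        # Deviation-scaled smoothing factor, clamped to a valid EMA alpha
        alpha = min(abs(scaled) * 5.0 / period, 1.0)
        out[i] = price if i == 0 else alpha * price + (1.0 - alpha) * out[i - 1]
    return out
```

When the scaled deviation is near zero, alpha shrinks and the line barely moves; a large deviation pushes alpha toward 1 and the average snaps to price, which is the "substantial smoothing yet quick adaptation" behavior described above.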

The author explains that, due to its design, it has minimal lag yet is able to provide considerable smoothing. Nevertheless, the Neural Network - HMA & DSMA indicator fuses it with Jurik filters/smoothing combined with the zero-lag HMA system.

Attached Images (click to enlarge):
NN_DSMA.JPG (82 KB)
NN-HMA-DSMA-Settings.JPG (90 KB)
☝ I cannot provide any kind of support such as coding (including source code) or troubleshooting services.

☢ There is no guarantee that this indicator works perfectly or without errors. Therefore, use it at your own risk; I accept no liability for system damage, financial losses, or even loss of life.

Last update:
8:00 AM
Thursday, 11 October 2018
Greenwich Mean Time (GMT)

Attached File
Neural-Network_HMA-DSMA-Jurik.zip (zip, 137 KB | 25 downloads)




Author: Forex Wiki Team
We are a team of experienced Forex traders [2000-2023] dedicated to living life on our own terms. Our main goal is to achieve financial independence and freedom. We have pursued self-education and gained extensive experience in the Forex market as the means to a self-sustaining lifestyle.