Neural Network – HMA & DSMA


Neural Network ─ Hull Moving Average (HMA) & Deviation-Scaled Moving Average (DSMA)

☛ Uses the HMA algorithm, but this one is a variation from low-lag to zero-lag, which is then fused with the following (a baseline HMA sketch appears after this list):

☛ Jurik filters/smoothing and custom MA types
☛ Combined with the Deviation-Scaled Moving Average algorithm
☛ Better & Best approach (Better & APB calculation)
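
For reference, here is a minimal Python sketch of the classic (low-lag) Hull Moving Average that this indicator starts from. The zero-lag variation, the Jurik smoothing, and the Better/APB calculations are the indicator's own additions and are not reproduced here.

```python
import numpy as np

def wma(values: np.ndarray, period: int) -> np.ndarray:
    """Linearly weighted moving average; the first period-1 outputs are NaN."""
    weights = np.arange(1, period + 1, dtype=float)
    out = np.full(len(values), np.nan)
    for i in range(period - 1, len(values)):
        window = values[i - period + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out

def hma(values: np.ndarray, period: int) -> np.ndarray:
    """Classic Hull MA: WMA of (2*WMA(n/2) - WMA(n)) over sqrt(n) bars."""
    diff = 2 * wma(values, period // 2) - wma(values, period)
    return wma(diff, int(np.sqrt(period)))
```

Usage would be, e.g., `hma(np.asarray(close_prices), 21)`; the double-WMA construction is what gives the HMA its low lag compared with a plain moving average.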

Please note: best used with the Volumes on Main Chart indicator, which is recommended for the indicator's trigger/update action.

Brief theory of neural networks:

A neural network is an adjustable model of outputs as functions of inputs. It consists of several layers:

  1. input layer, which consists of the input data
  2. hidden layer, which consists of processing nodes called neurons
  3. output layer, which consists of one or several neurons, whose outputs are the network outputs.

All nodes of adjacent layers are interconnected. These connections are called synapses. Every synapse has an assigned scaling coefficient, by which the data propagated through the synapse is multiplied. These scaling coefficients are called weights (w[i][j][k]). In a Feed-Forward Neural Network (FFNN) the data is propagated from the inputs to the outputs. Here is an example of an FFNN with one input layer, one output layer, and two hidden layers:

[Image: Neural Network - HMA & DSMA 1]

The topology of an FFNN is often abbreviated as follows: <# of inputs> - <# of neurons in the first hidden layer> - <# of neurons in the second hidden layer> - ... - <# of outputs>. The network above can be called a 4-3-3-1 network.
The data is processed by neurons in two steps, shown inside the circle by a summation sign and a step sign, respectively:

  1. All inputs are multiplied by the associated weights and summed.
  2. The resulting sums are processed by the neuron's activation function, whose output is the neuron's output.

It is the neuron's activation function that gives non-linearity to the neural network model. Without it, there is no reason to have hidden layers, and the neural network becomes a linear autoregressive (AR) model.
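
As a concrete illustration of the two processing steps above, here is a minimal forward pass through a 4-3-3-1 FFNN like the one pictured. The tanh activation and random weights are assumptions for the sketch (the article does not say which activation the indicator uses), and bias terms are omitted to match the description.

```python
import numpy as np

def ffnn_forward(x, weights, activation=np.tanh):
    """x: input vector; weights: one (out_dim, in_dim) matrix per layer of synapses."""
    a = np.asarray(x, dtype=float)
    for w in weights:
        z = w @ a            # step 1: multiply inputs by weights and sum
        a = activation(z)    # step 2: apply the activation function
    return a

rng = np.random.default_rng(0)
shapes = [(3, 4), (3, 3), (1, 3)]               # 4-3-3-1 topology
weights = [rng.normal(size=s) for s in shapes]  # illustrative random weights
print(ffnn_forward([0.1, 0.2, 0.3, 0.4], weights))
```

Replacing `activation` with the identity function collapses the whole network into a single linear map, which is exactly why the activation function is what makes the hidden layers worthwhile.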

Deviation-Scaled Moving Average (DSMA)


The new DSMA was created by John Ehlers and featured in the July 2018 issue of TASC magazine.

The DSMA is a data-smoothing technique that acts as an exponential moving average with a dynamic smoothing coefficient. The smoothing coefficient is automatically updated based on the magnitude of price changes. In the Deviation-Scaled Moving Average, the standard deviation from the mean is chosen as the measure of this magnitude. The resulting indicator provides substantial smoothing of the data even when price changes are small, while quickly adapting to those changes.
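
Since the article only describes the DSMA in words, here is a hedged Python sketch following Ehlers' published TASC (July 2018) formulation as I understand it: the 2-bar price change is run through his "super smoother" filter, rescaled by its RMS (the standard-deviation measure mentioned above), and the absolute scaled value drives the dynamic EMA coefficient. Treat the parameter choices (e.g. `period = 40`, the alpha clamp) as illustrative assumptions, not the indicator's exact code.

```python
import math
import numpy as np

def dsma(close: np.ndarray, period: int = 40) -> np.ndarray:
    # Super-smoother coefficients for a cutoff of 0.5 * period bars
    a1 = math.exp(-1.414 * math.pi / (0.5 * period))
    b1 = 2.0 * a1 * math.cos(1.414 * math.pi / (0.5 * period))
    c2, c3 = b1, -a1 * a1
    c1 = 1.0 - c2 - c3

    n = len(close)
    zeros = np.zeros(n)                 # 2-bar price change
    zeros[2:] = close[2:] - close[:-2]
    filt = np.zeros(n)
    out = np.zeros(n)
    out[0] = close[0]
    for i in range(1, n):
        filt[i] = (c1 * (zeros[i] + zeros[i - 1]) / 2.0
                   + c2 * filt[i - 1] + c3 * filt[max(i - 2, 0)])
        # RMS of the filtered changes over the lookback window
        window = filt[max(i - period + 1, 0) : i + 1]
        rms = math.sqrt(np.mean(window ** 2))
        scaled = filt[i] / rms if rms > 0 else 0.0
        # Large deviations push alpha up (fast EMA); small ones keep it slow.
        alpha = min(abs(scaled) * 5.0 / period, 1.0)  # clamp is an added safeguard
        out[i] = alpha * close[i] + (1.0 - alpha) * out[i - 1]
    return out
```

The key design point is the last two lines of the loop: the EMA coefficient is not fixed but proportional to how many standard deviations the filtered price change has moved, which is what gives the DSMA heavy smoothing in quiet markets and quick response in fast ones.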

The author explains that, due to its design, it has minimal lag yet is able to provide considerable smoothing. Nevertheless, the Neural Network - HMA & DSMA indicator fuses this with Jurik filters/smoothing combined with a zero-lag HMA system.

[Image: NN_DSMA.JPG]
[Image: NN-HMA-DSMA-Settings.JPG]

☝ I do not provide any kind of support such as coding (including source code) or troubleshooting services.

☢ There is no guarantee that this indicator works perfectly or without errors. So, use it at your own risk; I accept no liability for system damage, financial losses, or even loss of life.

last update:
8:00 AM
Thursday, 11 October 2018
Greenwich Mean Time (GMT)

Attached file:
File type: zip | Neural-Network_HMA-DSMA-Jurik.zip | 137 KB | 25 downloads



Author: Forex Wiki Team
We are a team of experienced Forex traders [2000-2023] dedicated to living life on our own terms. Our primary goal is to achieve financial independence and freedom. We have pursued self-education and gained extensive experience in the Forex market as our means to a self-sustaining lifestyle.