- October 11, 2018
- Posted by: Forex Wiki Team
- Category: Forex Trading System
※ Neural Network ─ Hull Moving Average (HMA) & Deviation-Scaled Moving Average (DSMA) ※
☛ Uses the HMA algorithm, but this one is a variation from low-lag to zero-lag (the classic Hull construction is sketched below this list)
and then... fused with the following:
↓
☛ Jurik Filters/Smoothing and custom MA types
☛ Combined with the Deviation-Scaled Moving Average algorithm
☛ Better & Best method (Better & APB calculation)
Note: Best used with the Volumes on Main Chart indicator ─ recommended for the indicator trigger/update action
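For orientation, the classic Hull Moving Average combines three linearly weighted moving averages to reduce lag. The zero-lag, Jurik-fused variation this indicator actually uses is not published, so the Python sketch below shows only the standard Hull construction as a baseline; the helper `wma` and the choice of NumPy are assumptions for illustration.

```python
import numpy as np

def wma(values: np.ndarray, period: int) -> np.ndarray:
    """Linearly weighted moving average; the newest bar gets the largest weight."""
    weights = np.arange(1, period + 1, dtype=float)
    out = np.full(len(values), np.nan)
    for i in range(period - 1, len(values)):
        window = values[i - period + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out

def hma(values: np.ndarray, period: int) -> np.ndarray:
    """Classic Hull MA: WMA(2 * WMA(n/2) - WMA(n), sqrt(n))."""
    half = wma(values, period // 2)
    full = wma(values, period)
    return wma(2.0 * half - full, int(np.sqrt(period)))
```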
Brief theory of Neural Networks:
A neural network is an adjustable model of outputs as functions of inputs. It consists of several layers:
- an input layer, which consists of input data
- hidden layers, which consist of processing nodes called neurons
- an output layer, which consists of one or several neurons, whose outputs are the network outputs.
All nodes of adjacent layers are interconnected. These connections are called synapses. Each synapse has an assigned scaling coefficient, by which the data propagated through the synapse is multiplied. These scaling coefficients are called weights (w[i][j][k]). In a feed-forward neural network (FFNN), the data is propagated from inputs to outputs. Here is an example of an FFNN with one input layer, one output layer, and two hidden layers:
The topology of an FFNN is often abbreviated as follows: <# of inputs> - <# of neurons in the first hidden layer> - <# of neurons in the second hidden layer> - ... - <# of outputs>. The above network can be referred to as a 4-3-3-1 network.
The data is processed by neurons in two steps, correspondingly shown within the circle by a summation sign and a step sign:
- All inputs are multiplied by the associated weights and summed.
- The resulting sums are processed by the neuron's activation function, whose output is the neuron output.
It is the neuron's activation function that gives non-linearity to the neural network model. Without it, there is no reason to have hidden layers, and the neural network becomes a linear auto-regressive (AR) model.
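To make the two-step processing concrete, here is a minimal Python sketch of a forward pass through the 4-3-3-1 network described above. The tanh hidden activation, the linear output neuron, and the random weights are assumptions for illustration; the post does not specify the activation the indicator actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 3, 3, 1]                # the 4-3-3-1 topology from the text
weights = [rng.standard_normal((m, n))    # one weight matrix per synapse layer
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x: np.ndarray) -> np.ndarray:
    """Propagate input data from inputs to outputs, layer by layer."""
    for k, w in enumerate(weights):
        x = x @ w                         # step 1: multiply by weights and sum
        if k < len(weights) - 1:          # step 2: hidden neurons apply the
            x = np.tanh(x)                # activation function (tanh assumed)
    return x                              # linear output neuron

print(forward(np.array([0.1, -0.4, 0.7, 0.2])))
```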
✜ Deviation-Scaled Moving Average (DSMA) ✜
The DSMA was created by John Ehlers and featured in the July 2018 issue of TASC magazine.
The DSMA is a data smoothing technique that acts as an exponential moving average with a dynamic smoothing coefficient. The smoothing coefficient is automatically updated based on the magnitude of price changes. In the Deviation-Scaled Moving Average, the standard deviation from the mean is chosen as the measure of this magnitude. The resulting indicator provides substantial smoothing of the data even when price changes are small, while quickly adapting to those changes.
The author explains that, because of its design, it has minimal lag yet is able to provide considerable smoothing. However, the Neural Network - HMA & DSMA indicator fuses this with Jurik filters/smoothing combined with the zero-lag HMA system.
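As an illustration of the principle just described (not Ehlers' exact published code, which also pre-filters the price data), here is a simplified Python sketch of an EMA whose smoothing coefficient is driven by the standard-deviation-scaled distance of price from its recent mean. The `period` default and the 5/period scale factor are assumptions for the sketch.

```python
import numpy as np

def dsma(close: np.ndarray, period: int = 40) -> np.ndarray:
    """EMA with an adaptive alpha scaled by the deviation from the mean."""
    out = np.full(len(close), np.nan)
    out[period - 1] = close[:period].mean()     # seed with a simple average
    for i in range(period, len(close)):
        window = close[i - period + 1 : i + 1]
        sd = window.std()
        # distance of the current price from the mean, in standard deviations
        dev = abs(close[i] - window.mean()) / sd if sd > 0 else 0.0
        alpha = min(1.0, dev * 5.0 / period)    # larger moves -> faster EMA
        out[i] = alpha * close[i] + (1.0 - alpha) * out[i - 1]
    return out
```

The effect matches the description above: when price changes are small, `dev` is small, `alpha` shrinks, and the filter smooths heavily; when price breaks away from its mean, `alpha` grows and the average catches up quickly.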
☝ I cannot provide any kind of support such as coding (including source code) or troubleshooting services.
☢ There are no guarantees that this indicator works perfectly or without errors. Therefore, use at your own risk; I accept no legal responsibility for system damage, financial loss, or even loss of life.
last update:
8:00 AM
Thursday, 11 October 2018
Greenwich Mean Time (GMT)