Artificial Intelligence

(23.03.07) Python Programming: TensorFlow Regression Analysis, ReLU (Rectified Linear Unit)

ํ”„๋กœ๊ทธ๋ž˜๋จธ ์˜ค์›” 2023. 3. 23.

Deep Learning
Neuron
Activation (activation function)
- Step function
- Sigmoid: used in the output layer for binary classification
- ReLU (Rectified Linear Unit): the most widely used activation; outputs 0 when x <= 0 and x when x > 0
- Softmax: used for multi-class classification
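Each of these activations is a one-liner; a plain-NumPy sketch (TensorFlow provides the equivalents `tf.nn.sigmoid`, `tf.nn.relu`, and `tf.nn.softmax`):

```python
import numpy as np

def step(x):
    # Step function: 1 where x > 0, else 0
    return (x > 0).astype(float)

def sigmoid(x):
    # Squashes input into (0, 1); used for binary-classification outputs
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # ReLU: 0 for x <= 0, x for x > 0
    return np.maximum(0.0, x)

def softmax(x):
    # Multi-class output: exponentiate (shifted for numerical stability) and normalize
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))           # [0. 0. 3.]
print(softmax(x).sum())  # probabilities sum to 1
```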
Loss (loss function): measures the gap between the Label and the Prediction
- MSE, MAE, Binary Crossentropy, Categorical Crossentropy

Gradient Descent
- Optimizers: the various algorithms derived from gradient descent
- SGD (Stochastic Gradient Descent)
- Adam
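Loss and optimizer come together in the training loop; a minimal gradient-descent sketch for linear regression in NumPy, using a made-up toy dataset (a Keras version would pass `loss='mse'` and `optimizer='sgd'` or `'adam'` to `model.compile`):

```python
import numpy as np

# Toy regression data from y = 3x + 1 (hypothetical)
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 * X + 1.0

w, b = 0.0, 0.0   # weight and bias, initialized to zero
lr = 0.05         # learning rate

for epoch in range(500):
    pred = w * X + b                    # model prediction
    error = pred - y
    mse = (error ** 2).mean()           # MSE loss
    grad_w = 2.0 * (error * X).mean()   # dMSE/dw
    grad_b = 2.0 * error.mean()         # dMSE/db
    w -= lr * grad_w                    # gradient-descent update
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # w, b approach 3.0 and 1.0
```

SGD differs only in that each update uses a random mini-batch instead of the full dataset; Adam additionally adapts the step size per parameter.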

Backpropagation (error backpropagation)

ํŽธํ–ฅ์น˜, ๊ฐ€์ค‘์น˜

์˜ค๋ฒ„ ํ”ผํŒ… overfitting (๊ณผ์ ํ•ฉ :์ง€๋‚˜์น˜๊ฒŒ ํ•™์Šตํ•˜์—ฌ ์‹ค์ œ ๋ฐ์ดํ„ฐ์— ๋…ธ์ถœ๋์„ ๋•Œ ์‚ฌ๊ณ ๋ฅผ ๋ชปํ•จ)

์–ธ๋” ํ”ผํŒ… underfitting (๊ณผ์†Œ์ ํ•ฉ :  ๋„ˆ๋ฌด ์ ๊ฒŒ ํ•™์Šตํ•œ ๊ฒฝ์šฐ)