๋ณธ๋ฌธ ๋ฐ”๋กœ๊ฐ€๊ธฐ

1๏ธโƒฃ AI•DS/๐Ÿ“’ Deep learning25

[์ธ๊ณต์ง€๋Šฅ] CNN ๐Ÿ“Œ ๊ต๋‚ด '์ธ๊ณต์ง€๋Šฅ' ์ˆ˜์—…์„ ํ†ตํ•ด ๊ณต๋ถ€ํ•œ ๋‚ด์šฉ์„ ์ •๋ฆฌํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. 1๏ธโƒฃ CNN โ‘  Architecture ๐Ÿ‘€ Convolution Neural Network ์ด๋ฏธ์ง€ ์ธ์‹, ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—์„œ ์ข‹์€ ์„ฑ๋Šฅ์„ ๋ณด์ด๋Š” ๋ชจ๋ธ์ด๋‹ค. CNN ์€ ์ „๊ฒฐํ•ฉ ๊ตฌ์กฐ๊ฐ€ ์•„๋‹ˆ๋‹ค ๐Ÿ‘‰ ์‹œ๋ƒ…์Šค ์—ฐ๊ฒฐ ๊ฐœ์ˆ˜๊ฐ€ ์ ๋‹ค ๐Ÿ‘‰ weight ๊ฐœ์ˆ˜๊ฐ€ ์ ๋‹ค ๐Ÿ’จ ์—ฐ์‚ฐ๋Ÿ‰์ด ์ ๋‹ค FC layer ๋ณด๋‹ค ๋” ํšจ๊ณผ์ ์œผ๋กœ feature extraction ์„ ์ง„ํ–‰ํ•˜๊ณ  ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ์ข‹์€ ์„ฑ๋Šฅ์„ ๋ณด์ธ๋‹ค. โ‘ก ImageNet competition ๐Ÿ‘€ ImageNet ๋ฐ์ดํ„ฐ์…‹ ๋ช…์นญ์œผ๋กœ 14000๋งŒ๊ฐœ์˜ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์…‹์ด๋‹ค. 1000๊ฐœ์˜ ์‚ฌ๋ฌผ ์ข…๋ฅ˜์— ๋Œ€ํ•œ ์ด๋ฏธ์ง€๊ฐ€ ๋‹ด๊ฒจ์ ธ ์žˆ๋‹ค. ์ด๋ฏธ์ง€ ์†์— ์กด์žฌํ•˜๋Š” ๊ฐ ์‚ฌ๋ฌผ์˜ ์ด๋ฆ„์„ ์–ผ๋งˆ๋‚˜ ์ž˜ ๋งž์ถ”๋Š”๊ฐ€์— ๊ด€ํ•œ ํ•™์ˆ ๋Œ€ํšŒ ILSVRC ์—.. 2022. 4. 23.
[์ธ๊ณต์ง€๋Šฅ] DNN ๐Ÿ“Œ ๊ต๋‚ด '์ธ๊ณต์ง€๋Šฅ' ์ˆ˜์—…์„ ํ†ตํ•ด ๊ณต๋ถ€ํ•œ ๋‚ด์šฉ์„ ์ •๋ฆฌํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์š”์•ฝ โ‘  Universal approximation theorem โ‘ก Activation function โ‘ข DNN : why deep? โ‘ฃ Random Initialization - tanh, ReLU : ๊ฐ€์ค‘์น˜๊ฐ€ 0์œผ๋กœ ๊ณ ์ • - sigmoid : ๊ฐ€์ค‘์น˜๊ฐ€ suboptimal matrix ํ˜•ํƒœ๋กœ ์—…๋ฐ์ดํŠธ โ‘ค Application example : Face recognition 1๏ธโƒฃ Universal approximation thm , activation function โ‘  Universal approximation theorems ๐Ÿ‘€ ์ถฉ๋ถ„ํ•œ ๊ฐ€์ค‘์น˜๊ฐ€ ์ ์šฉ๋˜์—ˆ์„ ๋•Œ, MLP ๋กœ ์–ด๋– ํ•œ ํ•จ์ˆ˜๋„ ๊ทผ์‚ฌ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. hidden layer ์—.. 2022. 4. 23.
[์ธ๊ณต์ง€๋Šฅ] MLP ๐Ÿ“Œ ๊ต๋‚ด '์ธ๊ณต์ง€๋Šฅ' ์ˆ˜์—…์„ ํ†ตํ•ด ๊ณต๋ถ€ํ•œ ๋‚ด์šฉ์„ ์ •๋ฆฌํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์š”์•ฝ โ‘  multiple layer ๊ฐ€ ํ•„์š”ํ•œ ์ด์œ  - XOR ๋ฌธ์ œ - feature extraction and classification โ‘ก Multi-layer perceptron - gradient descent - backpropagation algorithm 1๏ธโƒฃ MLP โ‘  MLP๋ž€ ๐Ÿ‘€ Perceptron vs Multi layer Perceptron Perceptron : ๋‰ด๋Ÿฐ์ด ํ•˜๋‚˜๋งŒ ์กด์žฌํ•˜๋Š” ๊ฒฝ์šฐ MLP (multi-layer perceptron) : ๋‰ด๋Ÿฐ(ํผ์…‰ํŠธ๋ก )์ด ์—ฌ๋Ÿฌ๊ฐœ๊ฐ€ ์กด์žฌํ•˜๋ฉฐ ์ธต์„ ์ด๋ฃฌ๋‹ค. layer : ๋‰ด๋Ÿฐ๊ณผ ๋‰ด๋Ÿฐ ์‚ฌ์ด์˜ ์‹œ๋ƒ…์Šค ์ธต์„ ์ง€์นญํ•จ ๐Ÿค” ๋‰ด๋Ÿฐ์ด ์—ด๊ฑฐ๋œ ๋ถ€๋ถ„์„ layer ๋ผ๊ณ  ์นญํ•˜๋ฉฐ hidden layer.. 2022. 4. 21.
[์ธ๊ณต์ง€๋Šฅ] Basic Neural Network ๐Ÿ“Œ ๊ต๋‚ด '์ธ๊ณต์ง€๋Šฅ' ์ˆ˜์—…์„ ํ†ตํ•ด ๊ณต๋ถ€ํ•œ ๋‚ด์šฉ์„ ์ •๋ฆฌํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๐Ÿ“ ๋ชฉ์ฐจ โ‘  ํŒŒ๋ธ”๋กœํ”„์˜ ๊ฐœ ์˜ˆ์ œ perceptron ๋ชจ๋ธ๋ง โ‘ก Training a Perceptron - Gradient desecent - example โ‘ข batch computing, epoch, hyperparameter 1๏ธโƒฃ Neuron ์œผ๋กœ 'ํŒŒ๋ธ”๋กœํ”„์˜ ๊ฐœ' ๋ชจ๋ธ๋ง ํ•ด๋ณด๊ธฐ โ‘  Example ๐Ÿ‘€ ์šฉ์–ด ์ •๋ฆฌ Activation = Feature Synaptic weight = filter = kernel : ์‹œ๋ƒ…์Šค ๊ฐ•๋„ Thresholding = Activation Function ๐Ÿ‘€ Training ๊ฐ€์ค‘์น˜๋ฅผ ์กฐ์ ˆํ•˜์—ฌ ํ•™์Šต์„ ์ง„ํ–‰ํ•œ๋‹ค. 2๏ธโƒฃ Training a Perceptron โ‘  Neuron ์˜ ์ˆ˜ํ•™ ๊ณต์‹ โ‘ก Gradient.. 2022. 4. 21.
[์ธ๊ณต์ง€๋Šฅ] Introduction to AI/Deep learning ๐Ÿ“Œ ๊ต๋‚ด '์ธ๊ณต์ง€๋Šฅ' ์ˆ˜์—…์„ ํ†ตํ•ด ๊ณต๋ถ€ํ•œ ๋‚ด์šฉ์„ ์ •๋ฆฌํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. 1๏ธโƒฃ Introduction to AI โ‘  AI Definition ๐Ÿ‘€ Rationality ๋ชฉํ‘œ๋ฅผ ์ตœ๋Œ€ํ•œ์œผ๋กœ ๋‹น์„คํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋ฆฌ์ ์ธ ํ–‰๋™ (act rationally) ์„ ํ•˜๋Š” ๊ฒƒ = AI ์—ฐ์‚ฐ์„ ํ†ตํ•ด ํ•ฉ๋ฆฌ์ ์ธ ํ–‰๋™์„ ํ•œ๋‹ค๋Š” ์ธก๋ฉด์—์„œ AI ๋Š” ์ปดํ“จํ„ฐ ๊ณผํ•™์˜ ๋ฟŒ๋ฆฌ๋‹ค. ๐Ÿ‘€ ์•ฝ์ธ๊ณต์ง€๋Šฅ vs ๊ฐ•์ธ๊ณต์ง€๋Šฅ Narrow AI : ์ฒด์Šค๋‘๊ธฐ, ์—˜๋ ˆ๋ฒ ์ดํ„ฐ ์กฐ์ ˆํ•˜๊ธฐ ๋“ฑ๊ณผ ๊ฐ™์€ ํŠน์ •ํ•œ (specific) ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ์ธ๊ณต์ง€๋Šฅ Strong AI : ์ธ๊ฐ„์˜ ์ธ์ง€ ๋Šฅ๋ ฅ์— ๋Œ€ํ•œ ๋ฒ”์œ„๊นŒ์ง€ ๋Šฅ๊ฐ€ํ•˜๋Š” ์ธ๊ณต์ง€๋Šฅ (act & thinking) โ‘ก History of AI ๐Ÿ‘€ ์—ญ์‚ฌ 50/60๋…„๋Œ€ : Neural network ์˜ ๋“ฑ์žฅ (์ธ๊ณต๋‰ด๋Ÿฐ์˜ ๊ฐœ๋…) ๐Ÿ’จ ๋ชจํ˜ธ์„ฑ/๋ณต.. 2022. 4. 21.