
[๋”ฅ๋Ÿฌ๋‹ ํŒŒ์ดํ† ์น˜ ๊ต๊ณผ์„œ] 5์žฅ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง โ… 

by isdawell 2022. 10. 6.

https://colab.research.google.com/drive/1uB-7ckV-Mrh0Zfugv9OIm7QuM_j2OLg5?usp=sharing 

 

[๋”ฅ๋Ÿฌ๋‹ ํŒŒ์ดํ† ์น˜ ๊ต๊ณผ์„œ] chapter 05 ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง.ipynb

Colaboratory notebook

colab.research.google.com

 

 

 

 

 

 

1๏ธโƒฃ  ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง 


 

๐Ÿ”น  Why convolutional layers are needed 

 

๐ŸŒ   Reduced computation 

 

•  A neural network that, instead of computing over the entire image at once, computes over local regions, saving time and resources while still analyzing the fine details of the image. 

 

 

๐ŸŒ   A structure well suited to image and video processing 

 

•  Instead of flattening the image into a 1-D vector and multiplying it by weights, the convolutional layer exists to preserve the spatial structure of the image data (e.g. 3x3 neighborhoods). 

•  Built to handle multi-dimensional arrays ๐Ÿ‘‰ specialized for multi-dimensional data such as color images 

 

 

 

๐Ÿ”น  CNN architecture 

 

https://thebook.io/080289/ch05/01/02/

 

1. The convolutional and pooling layers extract the main feature vectors from the input image. 

2. The extracted features are flattened into a 1-D vector and passed through the fully connected layers. 

3. The output layer applies the softmax activation function to produce the final result. 

 

 

 

โ‘  Input layer 

•  The layer the input image passes through first 

•  3-D data : height, width, channels 

•  The channel is 1 for a grayscale image and 3 for a color (RGB) image. 

 

 

https://thebook.io/080289/ch05/01/02-01/

 

→  The image can be represented as a (4, 4, 3) array. 

 

 

 

โ‘ก Convolutional layer 

•  Extracts features from the input data. 

•  A filter (= kernel) is applied to extract the image's features; the result is called a feature map. 

•  The kernel sweeps over every region of the image to extract features; 3x3 and 5x5 kernels are the most common. 

•  The kernel moves sequentially by a fixed interval called the stride. 

(See the figures in the textbook for the sliding process.) 

•  As the kernel traverses the input at the given stride, the convolution over all input values produces a new feature map, which is smaller than the original image. 
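The feature-map size follows the usual relation output = ⌊(input − kernel + 2·padding) / stride⌋ + 1. A minimal sketch (plain PyTorch, not from the book) that checks this on a 28x28 input:

import torch
import torch.nn as nn

x = torch.randn(1, 1, 28, 28)                              # (batch, channels, height, width)

conv = nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=0)
print(conv(x).shape)                                       # torch.Size([1, 1, 26, 26]) : (28 - 3)/1 + 1 = 26

conv_strided = nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=1)
print(conv_strided(x).shape)                               # torch.Size([1, 1, 14, 14]) : (28 - 3 + 2)/2 + 1 = 14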

 

 

•  Convolution on color images 

→ The filter has 3 channels. 

→ A different set of weights is applied to each of the R, G, B channels, and the three results are summed. โญโญ

→ Because the filter has 3 channels it is easy to assume there are 3 filters, but it is still a single filter! (Think of it the way one image is represented by its RGB channels.) 

 

 

•  When there are two or more filters 

https://thebook.io/080289/ch05/01/02-06/

→ Each filter becomes one channel of the resulting feature map. (4 filters ๐Ÿ‘‰ a 2x2x4 feature map) 

 

 

โ‘ข ํ’€๋ง์ธต

 

•  ํ•ฉ์„ฑ๊ณฑ๊ณผ ์œ ์‚ฌํ•˜๊ฒŒ ํŠน์„ฑ ๋งต์˜ ์ฐจ์›์„ ๋‹ค์šด ์ƒ˜ํ”Œ๋ง (์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์ถ•์†Œํ•˜๋Š” ๊ฒƒ) ํ•˜์—ฌ ์—ฐ์‚ฐ๋Ÿ‰์„ ๊ฐ์†Œ์‹œํ‚ค๊ณ  ์ฃผ์š” ํŠน์„ฑ ๋ฒกํ„ฐ๋ฅผ ์ถ”์ถœํ•˜์—ฌ ํ•™์Šต์„ ํšจ๊ณผ์ ์œผ๋กœ ํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•˜๋Š” ๋ ˆ์ด์–ด 

 

(1) Max pooling : ๋Œ€์ƒ ์˜์—ญ์—์„œ ์ตœ๋Œ€๊ฐ’์„ ์ถ”์ถœ 

(2) Average pooling : ๋Œ€์ƒ ์˜์—ญ์—์„œ ํ‰๊ท ์„ ๋ฐ˜ํ™˜ 

 

→ ๋Œ€๋ถ€๋ถ„ ํ•ฉ์„ฑ๊ณฑ ์—ฐ์‚ฐ์—์„œ ์ตœ๋Œ€ ํ’€๋ง์ด ์‚ฌ์šฉ 

ํ‰๊ท  ํ’€๋ง์€ ๊ฐ ์ปค๋„ ๊ฐ’์„ ํ‰๊ท ํ™” ์‹œํ‚ค์–ด ์ค‘์š”ํ•œ ๊ฐ€์ค‘์น˜๋ฅผ ๊ฐ–๋Š” ๊ฐ’์˜ ํŠน์„ฑ์ด ํฌ๋ฏธํ•ด์งˆ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ 

 

 

โ‘ฃ Fully connected layer

•  The feature map, whose dimensions have been reduced by the convolutional and pooling layers, is finally passed to the fully connected layer, where the image is flattened from a 3-D tensor into a 1-D vector. 

 

 

 

 

โ‘ค Output layer 

•  The softmax activation function is used, mapping the inputs to values between 0 and 1. The network therefore outputs the probability that the image belongs to each label, and the label with the highest probability is chosen as the final prediction. 
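A minimal illustration of that last step (plain PyTorch; the logits are made-up scores for three hypothetical classes):

import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, 0.1]])      # raw scores from the last fully connected layer
probs = F.softmax(logits, dim=1)              # values in (0, 1) that sum to 1
print(probs)                                  # ≈ tensor([[0.73, 0.16, 0.11]])
print(torch.argmax(probs, dim=1))             # tensor([0]) : the label with the highest probability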

 

 

 

 

๐Ÿ”น  1D, 2D, and 3D convolutions 

 

https://yjjo.tistory.com/8

 

 

์ด๋™ํ•˜๋Š” ๋ฐฉํ–ฅ์˜ ์ˆ˜์™€ ์ถœ๋ ฅ ํ˜•ํƒœ์— ๋”ฐ๋ผ 1D, 2D, 3D ๋กœ ํ•ฉ์„ฑ๊ณฑ์„ ๋ถ„๋ฅ˜ํ•  ์ˆ˜ ์žˆ๋‹ค. 

 

์ž…๋ ฅ๋ฐ์ดํ„ฐ์˜ ์ฐจ์›์ด ์•„๋‹Œ, ํ•„ํ„ฐ์˜ ์ง„ํ–‰ ๋ฐฉํ–ฅ์˜ ์ฐจ์›์˜ ์ˆ˜์— ๋”ฐ๋ผ 1D, 2D, 3D ๋ฅผ ๊ตฌ๋ถ„ํ•œ๋‹ค! 

 

 

โ‘  1D convolution 

Filter : k 

•  A convolution in which the filter can only move left and right, e.g. along a time axis. 

•  Mainly applied to sequence data in NLP. 

•  Input : [1,1,1,1,1]   , Filter : [0.25, 0.5, 0.25] 

→ With stride = 1 the filter slides one step to the right at a time, and the convolution produces the output [1,1,1]. The output is a 1-D array; 1D convolution is often used to smooth curves. 

•  Input : W , Filter : k , Output : W
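The example above can be reproduced with F.conv1d (a minimal sketch; the (batch, channel, width) reshaping is just what PyTorch's API expects):

import torch
import torch.nn.functional as F

signal = torch.tensor([[[1., 1., 1., 1., 1.]]])    # shape (batch=1, channels=1, width=5)
kernel = torch.tensor([[[0.25, 0.5, 0.25]]])       # shape (out_channels=1, in_channels=1, k=3)

print(F.conv1d(signal, kernel, stride=1))          # tensor([[[1., 1., 1.]]]) : the smoothing filter leaves a flat signal unchanged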

 

 

 

โ‘ก 2D convolution 

•  The form of convolution usually applied to image data. 

•  The filter moves in two directions: height and width. 

•  Input : (W,H) , Filter : (k,k) , Output : (W,H)

 

 

 

โ‘ข 3D convolution 

•  The filter can move in three directions: height, width, and depth (x, y, z). 

•  Input : (W,H,L) , Filter : (k,k,d) , Output : (W,H,L) , with d < L

•  Effective for event detection in videos and for 3-D medical images. 
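A minimal shape check with nn.Conv3d (plain PyTorch; the 16-frame "video" is a made-up example):

import torch
import torch.nn as nn

video = torch.randn(1, 1, 16, 28, 28)              # (batch, channels, depth, height, width), e.g. 16 frames
conv3d = nn.Conv3d(in_channels=1, out_channels=1, kernel_size=3, padding=1)
print(conv3d(video).shape)                         # torch.Size([1, 1, 16, 28, 28]) : the filter also slides along the depth axis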

 

 

 

โ‘ฃ 2D convolution with a 3D input 

•  The input is 3-D, but the output takes the form of a 2-D matrix. 

•  Input : (W,H,L)  , Filter : (k,k,L) , Output : (W,H) 

•  Representative examples : LeNet-5, VGG 

 

 

 

โ‘ค 1x1 convolution 

•  A (1,1,L) filter is applied to a 3-D input, producing a 2-D output. 

•  Input : (W,H,L) , Filter : (1,1,L) , Output : (W,H) 

•  A 1x1 convolution adjusts the number of channels and thereby reduces the amount of computation; GoogLeNet is the classic network that uses it. 

 

 

 

 

 

2๏ธโƒฃ  ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง ๋ง›๋ณด๊ธฐ 


 

๐Ÿ”น  Data setup 

 

โœ” fashion_mnist dataset : an example dataset built into torchvision; a collection of small images such as sneakers, shirts, and sandals, with 10 classes and 70,000 images of 28x28 pixels. 

  • train_images : 28x28 NumPy arrays with values between 0 and 255 
  • train_labels : an array of integer values from 0 to 9 

 

 

 

โ‘   Import the libraries 

 

import numpy as np 
import matplotlib.pyplot as plt 

import torch 
import torch.nn as nn 
from torch.autograd import Variable 
import torch.nn.functional as F 

import torchvision 
import torchvision.transforms as transforms # library used for data preprocessing 
from torch.utils.data import Dataset, DataLoader 


# GPU ํ˜น์€ CPU ์žฅ์น˜ ํ™•์ธ 

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

 

•  Using the GPU 

 

 

(1) ํ•˜๋‚˜์˜ GPU ๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ์ฝ”๋“œ 

 

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 
model = Net() 
model.to(device)

 

(2) ๋‹ค์ˆ˜์˜ GPU ๋ฅผ ์‚ฌ์šฉํ•  ๋–„ ์ฝ”๋“œ : nn.DataParallel 

 

device = torch.device("cuda" if torch.cuda.is_available() else "cpu") 
model = Net() 
if torch.cuda.device_count() > 1 : 
	model = nn.DataParallel(model) 

model.to(device)

 

 

→ With nn.DataParallel the batch is automatically split across the GPUs, so the batch size should also be scaled up by the number of GPUs. 

 

 

 

โ‘ก  Download the dataset 

 

train_dataset = torchvision.datasets.FashionMNIST('../sample', download = True, 
                                                  transform = transforms.Compose([transforms.ToTensor()])) 

test_dataset = torchvision.datasets.FashionMNIST('../sample', download = True, train=False, 
                                                  transform = transforms.Compose([transforms.ToTensor()]))

 

•  First parameter : the location where FashionMNIST will be downloaded 

•  With download = True, it checks whether the dataset already exists at that location and downloads it if it does not.

•  transform : converts the images to tensors with values between 0 and 1. 

 

 

 

โ‘ข  Pass the data to a data loader 

 

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size = 100) 
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size = 100)

 

•  DataLoader loads the data in batches of the desired size and can shuffle the order at random. 

•  batch_size = 100 : the data is loaded in batches of 100. 

 

 

 

โ‘ฃ  Define the classes used for classification 

 

labels_map = {0 : 'T-Shirt', 1 : 'Trouser', 2 : 'Pullover', 3 : 'Dress', 4 : 'Coat', 5 : 'Sandal', 6 : 'Shirt',
              7 : 'Sneaker', 8 : 'Bag', 9 : 'Ankle Boot'}

fig = plt.figure(figsize=(8,8)); # width and height of the figure to display, in inches 
columns = 4;
rows = 5;

for i in range(1, columns*rows +1) : 

  img_xy = np.random.randint(len(train_dataset)); 
  # draw one random integer between 0 and len(train_dataset) - 1

  img = train_dataset[img_xy][0][0,:,:] 
  # take the 28x28 image out of the (1, 28, 28) tensor 


  # plot the image
  fig.add_subplot(rows, columns, i) 
  plt.title(labels_map[train_dataset[img_xy][1]]) 
  plt.axis('off') 
  plt.imshow(img, cmap='gray') 

plt.show()

 

 

 

๐ŸŒ  Comparing rand, randint, and randn 

 

| np.random.randint( ) 

 

  • np.random.randint(10) : outputs a random integer between 0 and 9 
  • np.random.randint(1, 10) : outputs a random integer between 1 and 9 

 

| np.random.rand( ) 

 

  • np.random.rand(8) : outputs 8 uniform random numbers between 0 and 1, as an array of length 8
  • np.random.rand(4,2) : outputs uniform random numbers between 0 and 1 as a (4x2) array 

 

| np.random.randn( ) 

 

  • np.random.randn(8) : outputs random numbers from a Gaussian distribution with mean 0 and standard deviation 1, as an array of length 8 
  • np.random.randn(4,2) : outputs random numbers from a Gaussian distribution with mean 0 and standard deviation 1 as a (4x2) array
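A quick sketch that prints one example of each (the seed is only there to make reruns reproducible):

import numpy as np

np.random.seed(0)                        # only for reproducibility

print(np.random.randint(10))             # one integer in [0, 10)
print(np.random.randint(1, 10, size=3))  # three integers in [1, 10)
print(np.random.rand(4, 2))              # uniform values in [0, 1), shape (4, 2)
print(np.random.randn(4, 2))             # standard normal values (mean 0, std 1), shape (4, 2)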

 

 

 

๐Ÿ”น  Building the model  

 

โ‘   Build a deep neural network model (a network without convolutional layers) 

 

class FashionDNN(torch.nn.Module) : 

  # a model written as a class always inherits from torch.nn.Module 

  def __init__(self) : # initializes the object's attributes; called automatically when the object is created 
    super(FashionDNN, self).__init__() # call the constructor of the parent class, nn.Module 
    self.fc1 = nn.Linear(in_features = 784, out_features = 256) 
    # linear (affine) layer : nn.Linear() 
    # in_features : size of the input ๐Ÿ‘‰ passed in from the forward method 
    # out_features : size of the output ๐Ÿ‘‰ the result of the forward computation 

    self.drop = nn.Dropout(0.25) 
    # values are zeroed out with probability 0.25 
    # the remaining 75% of the values are scaled up by a factor of 1/(1-p) 

    self.fc2 = nn.Linear(in_features = 256, out_features = 128) 
    self.fc3 = nn.Linear(in_features = 128, out_features = 10) 
  
  
  def forward(self, input_data) : # forward pass : obtain the predicted y from the input x 
  
  # the method must be named forward 
  # it runs automatically when the object is called with data 

    out = input_data.view(-1,784) # PyTorch view : plays the same role as reshape 
    # reshape into a 2-D tensor whose second dimension is 784; the first dimension (-1) is inferred automatically

    out = F.relu(self.fc1(out)) 
    out = self.drop(out) 
    out = F.relu(self.fc2(out)) 
    out = self.fc3(out) 

    return out

 

 

•  A model written as a class always inherits from torch.nn.Module. 

•  __init__() initializes the attributes of the object and is called automatically when the object is created. 

•  nn : the package that gathers the modules needed to build deep learning models 

•  nn.Dropout(p) : zeroes tensor values with probability p; the remaining values are multiplied by 1/(1-p), i.e. scaled up. 

•  forward( ) : runs automatically when the object is called with data and carries out the forward pass. The method must be named forward. 

•  data.view( ) : changes the shape of a tensor, like reshape in NumPy. 

 

 

 

๐ŸŒ  Two ways to define layers and activation functions 

 

1. Define them inside forward()  →  F.relu()   =   nn.functional.relu() 

 

import torch.nn.functional as F 

inputs = torch.randn(64,3,224,224) 
weights = torch.randn(64,3,3,3) 
bias = torch.randn(64) 
outputs = F.conv2d(inputs, weights, bias, padding = 1)

 

โˆ˜ The input and the weights have to be passed in directly, so the weight values must be defined anew every time they are needed.

 

 

2. Define them in __init__()  →  nn.ReLU() 

 

import torch.nn as nn 

inputs = torch.randn(64,3,224,224) 
conv = nn.Conv2d(in_channels = 3, out_channels = 64, kernel_size = 3, padding = 1) 
outputs = conv(inputs) 
layer = nn.Conv2d(1,1,3)

 

 

โ‘ก  ํŒŒ๋ผ๋ฏธํ„ฐ ์ •์˜ 

 

•  ๋ชจ๋ธ ํ•™์Šต ์ „์— ์†์‹คํ•จ์ˆ˜, ํ•™์Šต๋ฅ , ์˜ตํ‹ฐ๋งˆ์ด์ €์— ๋Œ€ํ•ด ์ •์˜ 

 

learning_rate = 0.001 # learning rate 
model = FashionDNN() 
model.to(device) 

criterion = nn.CrossEntropyLoss() # loss function for classification 
optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate) # optimizer 
print(model)

 

 

๐Ÿ”น  ๋ชจ๋ธ ํ•™์Šต

 

โ‘  Training the deep neural network (DNN) model

 

 

•  ํŒŒ๋ผ๋ฏธํ„ฐ ์ •์˜ 

 

learning_rate = 0.001 # learning rate 
model = FashionDNN() 
model.to(device) 

criterion = nn.CrossEntropyLoss() # loss function for classification 
optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate) # optimizer 
print(model)

 

•  ๋ชจ๋ธ ํ•™์Šต 

 

num_epochs = 5 
count = 0 
loss_list = [] # loss values 
iteration_list = [] # iteration counts 
accuracy_list = [] # accuracy values 
predictions_list = [] # predicted labels 
labels_list = [] # ground-truth labels 

for epoch in range(num_epochs) : # train over the whole dataset 5 times 

  for images, labels in train_loader : 
    images, labels = images.to(device), labels.to(device) #๐Ÿšฉ

    train = Variable(images.view(100,1,28,28))  #๐Ÿšฉ automatic differentiation
    labels = Variable(labels) 

    outputs = model(train) 
    loss = criterion(outputs, labels) 
    optimizer.zero_grad() 
    loss.backward() 
    optimizer.step() 
    count += 1 

    if not (count%50) : # evaluate every 50 iterations 

      total = 0 
      correct = 0 

      for images, labels in test_loader : 
        images, labels = images.to(device) , labels.to(device) 
        labels_list.append(labels) 
        test = Variable(images.view(100,1,28,28)) 
        outputs = model(test) 
        predictions = torch.max(outputs, 1)[1].to(device) 
        predictions_list.append(predictions) 
        correct += (predictions == labels).sum() 
        total += len(labels) 
      
      accuracy = correct*100/total  #๐Ÿšฉ
      loss_list.append(loss.data) 
      iteration_list.append(count) 
      accuracy_list.append(accuracy) 
    
    if not (count % 500) : # report every 500 iterations 
      print('Iteration : {}, Loss : {}, Accuracy : {}%'.format(count, loss.data, accuracy))

 

 

โ†ช  ๋ชจ๋ธ๊ณผ ๋ฐ์ดํ„ฐ๋Š” ๋™์ผํ•œ ์žฅ์น˜์— ์žˆ์–ด์•ผ ํ•œ๋‹ค. model.to(device) ๊ฐ€ GPU ๋ฅผ ์‚ฌ์šฉํ–ˆ๋‹ค๋ฉด, images.to(device), labels.to(device) ๋„ GPU ์—์„œ ์ฒ˜๋ฆฌ๋˜์–ด์•ผ ํ•œ๋‹ค. 

 

โ†ช  torch.autograd ํŒจํ‚ค์ง€์˜ Variable : ์—ญ์ „ํŒŒ๋ฅผ ์œ„ํ•œ ๋ฏธ๋ถ„ ๊ฐ’์„ ์ž๋™์œผ๋กœ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋กœ, ๋‚˜๋™ ๋ฏธ๋ถ„์„ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•ด์„œ ๊ผญ ์ง€์ •ํ•ด์ฃผ์–ด์•ผ ํ•œ๋‹ค. 

 

โ†ช accuracy ๋Š” ์ „์ฒด ์˜ˆ์ธก์— ๋Œ€ํ•œ ์ •ํ™•ํ•œ ์˜ˆ์ธก์˜ ๋น„์œจ๋กœ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ๋‹ค. 

 

 

 

 

โ‘ก ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ ์ƒ์„ฑ ๋ฐ ํ•™์Šต 

 

class FashionCNN(nn.Module) : 

  def __init__(self) : 
    super(FashionCNN, self).__init__() 
    self.layer1 = nn.Sequential( # ๐Ÿšฉ
        nn.Conv2d(in_channels = 1, out_channels = 32, kernel_size =3, padding = 1), # ๐Ÿšฉ
        nn.BatchNorm2d(32), # ๐Ÿšฉ
        nn.ReLU(), 
        nn.MaxPool2d(kernel_size=2, stride=2) # ๐Ÿšฉ
    )

    self.layer2 = nn.Sequential( 
        nn.Conv2d(in_channels = 32, out_channels = 64, kernel_size =3), 
        nn.BatchNorm2d(64), 
        nn.ReLU(), 
        nn.MaxPool2d(2) 
    )

    self.fc1 = nn.Linear(in_features = 64*6*6, out_features = 600)
    self.drop = nn.Dropout2d(0.25) 
    self.fc2 = nn.Linear(in_features = 600, out_features = 120) 
    self.fc3 = nn.Linear(in_features = 120, out_features = 10) 

  
  def forward(self, x) : 
    out = self.layer1(x) 
    out = self.layer2(out) 
    out = out.view(out.size(0), -1) 
    out = self.fc1(out) 
    out = self.drop(out) 
    out = self.fc2(out) 
    out = self.fc3(out) 
    return out

 

 

•  nn.Sequential : lets you define the network layers in __init__ and write the forward pass layer by layer in highly readable code. A good choice when the data simply flows through the layers in order. 

•  Convolutional layer : extracts image features through the convolution operation 

 

nn.Conv2d( in_channels = 1, out_channels = 32, kernel_size = 3, padding = 1 ) 

๐Ÿ‘‰ in_channels : the number of input channels; usually 1 for grayscale and 3 for color images. 

โ€ป Thought of in 3-D, the channel corresponds to the 'depth'. 

๐Ÿ‘‰ out_channels : the number of output channels 

๐Ÿ‘‰ kernel_size : the kernel is the shared set of parameters used to detect image features, and these kernel parameters are exactly what a CNN learns. Since it is set to 3, the kernel is a 3x3 square. 

๐Ÿ‘‰ padding : padding the border of the input with zeros to control the output size; the larger the padding, the larger the output. 

•  BatchNorm2d : normalizes each batch using its mean and variance, even when different batches follow different distributions; the activations are adjusted toward a Gaussian with mean 0 and variance 1. 

 

 

 

 

 

•  MaxPool2d : used to shrink the image; the larger the stride, the smaller the output. 

nn.MaxPool2d(kernel_size=2, stride =2) 

๐Ÿ‘‰ kernel_size : the size of the m x n window over which the maximum is taken 

๐Ÿ‘‰ stride : the interval by which the window moves over the input; a larger stride value gives a smaller output. 

•  nn.Linear : for class prediction, the image-shaped data is flattened into an array. Because the output size of Conv2d depends on the padding and stride values, the in_features value has to be computed by hand and entered, as in the worked example below. (See the textbook for the formula.) 
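As a worked example of where in_features = 64*6*6 comes from, apply output = ⌊(input − kernel + 2·padding)/stride⌋ + 1 stage by stage to a 28x28 Fashion-MNIST image; the shape check at the end assumes the FashionCNN class defined above.

# layer1 : Conv2d(k=3, padding=1)   : (28 - 3 + 2)/1 + 1 = 28  ->  32 x 28 x 28
#          MaxPool2d(k=2, stride=2) : 28 / 2              = 14  ->  32 x 14 x 14
# layer2 : Conv2d(k=3, padding=0)   : (14 - 3)/1 + 1      = 12  ->  64 x 12 x 12
#          MaxPool2d(2)             : 12 / 2              = 6   ->  64 x 6 x 6
# flatten : 64 * 6 * 6 = 2304 = in_features of fc1

import torch

x = torch.randn(1, 1, 28, 28)
cnn = FashionCNN()                                  # the class defined above
features = cnn.layer2(cnn.layer1(x))
print(features.shape)                               # torch.Size([1, 64, 6, 6])
print(features.view(features.size(0), -1).shape)    # torch.Size([1, 2304])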

 

 

 

•  ํŒŒ๋ผ๋ฏธํ„ฐ ์ •์˜ 

 

learning_rate = 0.001
model = FashionCNN()
model.to(device)

criterion = nn.CrossEntropyLoss() # loss function 
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # optimizer 
print(model)

 

 

•  ๋ชจ๋ธ ํ•™์Šต ๊ฒฐ๊ณผ (์ฝ”๋“œ๋Š” DNN๊ณผ ๋™์ผ) 

 

 

 

 

 

 

 

3๏ธโƒฃ  ์ „์ดํ•™์Šต 


 

๐Ÿ’ก ํฐ ๋ฐ์ดํ„ฐ์…‹์„ ํ™•๋ณดํ•˜๋Š”๋ฐ ๋“œ๋Š” ํ˜„์‹ค์ ์ธ ์–ด๋ ค์›€์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ๋“ฑ์žฅํ•œ ๋ฐฉ๋ฒ• 

 

Transfer learning means taking the weights of a model pre-trained on a very large dataset such as ImageNet and adapting them to the task at hand. 

โญ Transfer learning approaches

(1) Feature extraction 
(2) Fine-tuning 

 

 

๐Ÿ”น  ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ ๋ฐฉ๋ฒ• 

 

data_path = 'catanddog/train/'

transform = transforms.Compose(    
    [
        transforms.Resize([256,256]), 
        transforms.RandomResizedCrop(224), 
        transforms.RandomHorizontalFlip(), 
        transforms.ToTensor() 

    ]
)


train_dataset = torchvision.datasets.ImageFolder(
    data_path, transform = transform 
)


train_loader = torch.utils.data.DataLoader(
    train_dataset, 
    batch_size = 32, 
    num_workers = 8, 
    shuffle = True
)

print(len(train_dataset)) # 385

 

 

•  torchvision.transforms : transforms the image data so it can be used as input to the model 

โ†ช Resize : resizes the image 

โ†ช RandomResizedCrop : crops the image to a random size and aspect ratio (for data augmentation)

โ†ช RandomHorizontalFlip : randomly flips the image horizontally 

โ†ช ToTensor : converts the image data to a tensor 

 

 

 

๐Ÿ”น  Feature extraction 

 

๐ŸŒ   Feature extractor 

 

•  Take a model pre-trained on ImageNet and replace only the final fully connected layer.

•  New data is passed through the pre-trained network's convolutional layers (whose weights are frozen), and only the classifier (the fully connected layer) is trained on that output. 

 

 

•  Pre-trained classification models that can be used 

 

Xception
Inception V3
ResNet50
VGG16
VGG19
MobileNet 

 

 

 

 

 

 

๐ŸŒ  Using a pre-trained ResNet18 model 

 

•  ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ ๋‚ด๋ ค๋ฐ›๊ธฐ 

 

from torchvision import models 

resnet18 = models.resnet18(pretrained = True) 

 

 

 

•  ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ ํ•™์Šต ์œ ๋ฌด ์ง€์ • 

 

param.requires_grad = False 


๐Ÿ‘‰ Indicates that gradients for these parameters do not need to be computed during backpropagation 

 

# ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ ํŒŒ๋ผ๋ฏธํ„ฐ ํ•™์Šต ์œ ๋ฌด ์ง€์ • 

def set_parameter_requires_grad(model, feature_extracting = True) : 
  if feature_extracting : 
    for param in model.parameters() : 
      param.requires_grad = False # ๐Ÿ“Œ ์—ญ์ „ํŒŒ ์ค‘ ํŒŒ๋ผ๋ฏธํ„ฐ๋“ค์— ๋Œ€ํ•œ ๋ณ€ํ™”๋ฅผ ๊ณ„์‚ฐํ•  ํ•„์š”๊ฐ€ ์—†์Œ


set_parameter_requires_grad(resnet18)

 

 

 

•  Add a fully connected layer to ResNet18 

 

resnet18.fc = nn.Linear(512, 2)  # 2 because there are two classes (cat and dog) 

 

 

 

•  ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ ๊ฐ’ ํ™•์ธ 

 

# ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ ๊ฐ’ ํ™•์ธ 

for name, param in resnet18.named_parameters() : 
  if param.requires_grad : 
    print(name, param.data)

 

 

 

•  ๋ชจ๋ธ ๊ฐ์ฒด ์ƒ์„ฑ ๋ฐ ์†์‹คํ•จ์ˆ˜ ์ •์˜ 

 

# ๋ชจ๋ธ ๊ฐ์ฒด ๋ฐ ์†์‹คํ•จ์ˆ˜ 

model = models.resnet18(pretrained = True) 

for param in model.parameters() : 
  param.requires_grad = False

model.fc = torch.nn.Linear(512,2) 

for param in model.fc.parameters() : 
  # ์™„์ „์—ฐ๊ฒฐ์ธต์€ ํ•™์Šต
  param.requires_grad = True 


optimizer = torch.optim.Adam(model.fc.parameters()) 
cost = torch.nn.CrossEntropyLoss() # ์†์‹คํ•จ์ˆ˜ 
print(model)

 

 

๐Ÿ”น  Fine-tuning 

 

 

 

 

๐ŸŒ   Fine-Tuning 

 

•  ํŠน์„ฑ ์ถ”์ถœ ๊ธฐ๋ฒ•์—์„œ ๋‚˜์•„๊ฐ€ Pre-trained ๋ชจ๋ธ๊ณผ ํ•ฉ์„ฑ๊ณฑ์ธต, ๋ฐ์ดํ„ฐ ๋ถ„๋ฅ˜๊ธฐ์˜ ๊ฐ€์ค‘์น˜๋ฅผ ์—…๋ฐ์ดํŠธ ํ•˜์—ฌ ํ›ˆ๋ จ ์‹œํ‚ค๋Š” ๋ฐฉ์‹ → ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ชฉ์ ์— ๋งž๊ฒŒ ์žฌํ•™์Šต ํ•˜๊ฑฐ๋‚˜ ๊ฐ€์ค‘์น˜ ์ผ๋ถ€๋ฅผ ์žฌํ•™์Šต ํ•˜๋Š” ๊ฒƒ 

 

•  ํŠน์„ฑ ์ถ”์ถœ → ImageNet ๋ฐ์ดํ„ฐ์˜ ์ด๋ฏธ์ง€ ํŠน์ง•๊ณผ ๊ฐ€๋ น, ์ „์ž์ƒ๊ฑฐ๋ž˜ ๋ฌผํ’ˆ ์ด๋ฏธ์ง€ ํŠน์ง•์ด ๋น„์Šทํ•œ, ๊ทธ๋Ÿฌ๋‹ˆ๊นŒ ๋ชฉํ‘œ ํŠน์„ฑ์„ ์ž˜ ์ถ”์ถœํ–ˆ๋‹ค๋Š” ๊ฐ€์ • ํ•˜์— ์ข‹์€ ์„ฑ๋Šฅ์„ ๋‚ผ ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์ด๋‹ค. ๋งŒ์•ฝ ํŠน์„ฑ์ด ์ž˜๋ชป ์ถ”์ถœ๋˜์—ˆ๋‹ค๋ฉด ๋ฏธ์„ธ์กฐ์ • ๊ธฐ๋ฒ•์œผ๋กœ ์ƒˆ๋กœ์šด ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•ด ๋„คํŠธ์›Œํฌ ๊ฐ€์ค‘์น˜๋ฅผ ์—…๋ฐ์ดํŠธํ•˜์—ฌ ํŠน์„ฑ์„ ๋‹ค์‹œ ์ถ”์ถœํ•œ๋‹ค. 

 

•  ๋งŽ์€ ์—ฐ์‚ฐ๋Ÿ‰์ด ํ•„์š”ํ•˜๋ฏ€๋กœ ๊ผญ GPU ์‚ฌ์šฉํ•˜๊ธฐ! 

 

 

๋ฐ์ดํ„ฐ์…‹ ํฌ๊ธฐ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ์˜ ์œ ์‚ฌ์„ฑ fine tuning ํ•™์Šต ๋ฐฉ๋ฒ• 
ํฌ๋‹ค ์ž‘๋‹ค ๋ชจ๋ธ ์ „์ฒด๋ฅผ ์žฌํ•™์Šต
ํฌ๋‹ค ํฌ๋‹ค ์™„์ „์—ฐ๊ฒฐ์ธต๊ณผ ๊ฐ€๊นŒ์šด ํ•ฉ์„ฑ๊ณฑ์ธต์˜ ๋’ท๋ถ€๋ถ„๊ณผ ๋ฐ์ดํ„ฐ ๋ถ„๋ฅ˜๊ธฐ๋ฅผ ํ•™์Šต์‹œํ‚จ๋‹ค. 
์ž‘๋‹ค ์ž‘๋‹ค ๋ฐ์ดํ„ฐ๊ฐ€ ์ ์–ด์„œ ์ผ๋ถ€ ๊ณ„์ธต์— fine-tuning ์„ ํ•˜์—ฌ๋„ ํšจ๊ณผ๊ฐ€ ์—†์„ ์ˆ˜๋„ ์žˆ๋‹ค. ํ•ฉ์„ฑ๊ณฑ์ธต์˜ ์–ด๋Š ๋ถ€๋ถ„๊นŒ์ง€ ์ƒˆ๋กœ ํ•™์Šต์‹œํ‚ฌ์ง€ ์ ๋‹นํžˆ ์„ค์ •ํ•ด์•ผ ํ•œ๋‹ค. 
์ž‘๋‹ค ํฌ๋‹ค ๋งŽ์€ ๊ณ„์ธต์— ์ ์šฉํ•˜๋ฉด ๊ณผ์ ํ•ฉ์ด ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ์™„์ „ ์—ฐ๊ฒฐ์ธต์— ๋Œ€ํ•ด์„œ๋งŒ ์ ์šฉํ•œ๋‹ค. 

 

 

 

 

 

4๏ธโƒฃ  ์„ค๋ช… ๊ฐ€๋Šฅํ•œ CNN 


 

๐Ÿ”น  Explainable CNN 

 

•  A technique for presenting what a deep learning model computes in a way humans can understand 

•  Black box : it is hard to explain how deep learning models work internally ๐Ÿ‘‰ to understand the processing, it needs to be visualized 

•  Visualization methods : (1) visualizing the filters  (2) visualizing the feature maps 

 

 

๐ŸŒ   Visualizing feature maps 

•  feature map : the result of applying a filter to the input 

•  Visualizing feature maps helps us understand how the network detects the input's features in each feature map. 

 

class LayerActivations:
    features=[]
    def __init__(self, model, layer_num):
        self.hook = model[layer_num].register_forward_hook(self.hook_fn) 
        # register_forward_hook : captures the input/output of a module during the forward pass 
        # ๐Ÿ’ก hooks let us inspect intermediate results - used here to visualize feature maps 
   
    def hook_fn(self, module, input, output):
        # move to the CPU before converting to NumPy so this also works when the model is on the GPU
        self.features = output.detach().cpu().numpy()


    def remove(self): 
        self.hook.remove()
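A minimal usage sketch (assuming the FashionCNN class, the Fashion-MNIST train_dataset, and device from section 2 are still in scope): hook the Conv2d inside layer1, run one forward pass, and plot the first feature map.

cnn_model = FashionCNN().to(device)

img, _ = train_dataset[0]                       # one (1, 28, 28) Fashion-MNIST image
act = LayerActivations(cnn_model.layer1, 0)     # hook the Conv2d inside layer1
cnn_model(img.unsqueeze(0).to(device))          # the forward pass fills act.features
act.remove()                                    # remove the hook when done

print(act.features.shape)                       # (1, 32, 28, 28)
plt.imshow(act.features[0][0], cmap='gray')     # visualize the first of the 32 feature maps
plt.show()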

 

 

โœ” hook 

 

 

 

 

 

 

 

 

๐Ÿ‘‰ In the early layers the shape of the input image is largely preserved; in the middle layers the cat shape gradually fades; and close to the output layer the original shape is gone and only the image's features are passed on. 

 

๐Ÿ’ก  By visualizing filters and feature maps, we can build confidence in what a CNN produces. 

 

 

 

 

5๏ธโƒฃ  ๊ทธ๋ž˜ํ”„ ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ 


 

๐Ÿ”น Graph Convolutional Network 

 

๐ŸŒ   Graphs

•  Node : an element

•  Edge : represents how elements are connected 

→ Constructed from expert knowledge or intuition about the problem to be solved. 

  

 

๐ŸŒ   Graph Neural Networks (GNN) 

โˆ˜  How graph data is represented 

(1) Adjacency matrix : entries are 1 or 0 depending on whether two nodes are connected

(2) Feature matrix : since it is difficult to capture node characteristics from the adjacency matrix alone, a feature matrix is used as well. The features to use are selected from the input data, and each row holds the values the corresponding node has for the selected features. 
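A tiny concrete example of the two matrices (a hypothetical 4-node graph with made-up features, written with torch for consistency with the rest of the post):

import torch

# a hypothetical undirected graph with 4 nodes and edges 0-1, 0-2, 1-2, 2-3
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])      # adjacency matrix : 1 if two nodes are connected, 0 otherwise

X = torch.tensor([[1., 0.],
                  [0., 1.],
                  [1., 1.],
                  [0., 0.]])              # feature matrix : one row of selected feature values per node

print(A.shape, X.shape)                   # torch.Size([4, 4]) torch.Size([4, 2])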

 

 

๐ŸŒ   Graph Convolutional Networks (GCN) 

•  An algorithm that extends convolution on images to graph data

 

 

 

 

 

๐Ÿ‘‰  Graph convolutional layer : converts graph-structured data into matrix form. 

๐Ÿ‘‰  Readout : a function that collapses the feature matrix into a single vector; it takes the average of the feature vectors and produces one vector that represents the whole graph. 
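A minimal sketch of one graph convolutional layer followed by a mean readout, reusing the A and X from the sketch above. It follows the common formulation H' = ReLU(ÂXW) with Â = A + I (self-loops added); this is a generic illustration, not the book's exact code.

import torch
import torch.nn as nn
import torch.nn.functional as F

A_hat = A + torch.eye(A.size(0))                  # add self-loops so each node keeps its own features
A_norm = A_hat / A_hat.sum(dim=1, keepdim=True)   # row-normalize : average over each node's neighbourhood

W = nn.Linear(2, 8, bias=False)                   # learnable weights of the graph convolutional layer

H = F.relu(W(A_norm @ X))                         # graph convolution : aggregate neighbours, then transform
graph_vector = H.mean(dim=0)                      # readout : average node features into one vector for the whole graph

print(H.shape, graph_vector.shape)                # torch.Size([4, 8]) torch.Size([8])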

 

 

 

•  Applications : social-network graphs, citation networks in academic research, 3D meshes 

 

