The total number of syllables that can be formed in Hangul by combining 19 initial consonants, 21 medial vowels, and 28 final consonants (including the empty final) is 11,172. Designing a complete Hangul font by hand therefore costs designers a great deal of time and money, and this is the problem we aim to solve with deep learning. Rapidly advancing techniques have improved performance and accuracy, but limitations remain. Having successfully completed experiments focused on the composability of Hangul components, this paper provides an important direction for improving the performance of Hangul generation models through a dataset that uses positional information, and we expect it to contribute to future Hangul generation research.
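The 11,172 figure follows directly from Unicode's algorithmic layout of precomposed Hangul syllables (U+AC00–U+D7A3), which can be sketched in a few lines:

```python
# Precomposed Hangul syllables are laid out algorithmically in Unicode:
# 19 initials x 21 medial vowels x 28 finals (index 0 = no final).
NUM_INITIALS, NUM_MEDIALS, NUM_FINALS = 19, 21, 28
HANGUL_BASE = 0xAC00  # code point of the first syllable, '가'

def compose(initial: int, medial: int, final: int) -> str:
    """Map (initial, medial, final) indices to a precomposed syllable."""
    return chr(HANGUL_BASE + (initial * NUM_MEDIALS + medial) * NUM_FINALS + final)

print(NUM_INITIALS * NUM_MEDIALS * NUM_FINALS)  # 11172
print(compose(0, 0, 0))    # '가' (initial ㄱ, medial ㅏ, no final)
print(compose(18, 20, 27)  # '힣', the last syllable (U+D7A3)
      )
```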
🛠️ In Progress: Migrating the framework from TensorFlow to PyTorch
- Ubuntu 22.04.3 LTS
- NVIDIA GeForce RTX 2080 Ti
- Python 3.9.13
- Tensorflow-gpu 1.15
conda create --name decompose python=3.9.13
conda activate decompose
pip install -r requirements.txt
# change directory to datasets
# generate content images
python datasets/font2img.py --label_file datasets/characters/50characters.txt --font_dir datasets/fonts/source --output_dir datasets/images/source
# generate target images
python datasets/font2img.py --label_file datasets/characters/50characters.txt --font_dir datasets/fonts/target --output_dir datasets/images/target --start_idx 1
python datasets/separator/separator-1type.py
python datasets/separator/separator-2type.py
python datasets/separator/separator-3type.py
python datasets/separator/separator-4type.py
python datasets/separator/separator-5type.py
python datasets/separator/separator-6type.py
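The six `separator-*type.py` scripts split syllables by structural type. Their exact taxonomy is not documented here; one common 6-way classification, assumed purely for illustration, crosses the medial vowel's orientation (vertical, horizontal, or mixed) with the presence of a final consonant:

```python
# Decompose a precomposed syllable with Unicode arithmetic, then classify it
# into one of 6 layout types. This 6-way taxonomy is an assumption for
# illustration, not taken from the separator scripts themselves.
HANGUL_BASE = 0xAC00  # '가', the first precomposed syllable

# Medial vowel indices (Unicode order) grouped by visual orientation.
VERTICAL = {0, 1, 2, 3, 4, 5, 6, 7, 20}   # ㅏ ㅐ ㅑ ㅒ ㅓ ㅔ ㅕ ㅖ ㅣ
HORIZONTAL = {8, 12, 13, 17, 18}          # ㅗ ㅛ ㅜ ㅠ ㅡ
# Remaining indices (ㅘ ㅙ ㅚ ㅝ ㅞ ㅟ ㅢ) are mixed.

def syllable_type(ch):
    """Return a layout type 1-6 for a precomposed Hangul syllable."""
    code = ord(ch) - HANGUL_BASE
    assert 0 <= code < 11172, "not a precomposed Hangul syllable"
    medial = (code // 28) % 21
    final = code % 28
    if medial in VERTICAL:
        shape = 1   # vowel to the right of the initial, e.g. 가
    elif medial in HORIZONTAL:
        shape = 2   # vowel below the initial, e.g. 고
    else:
        shape = 3   # vowel wraps right and below, e.g. 과
    return shape if final == 0 else shape + 3  # a final consonant adds 3

print(syllable_type("가"), syllable_type("한"))  # 1 4
```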
python datasets/combine.py
python datasets/name-modify.py
python datasets/img2tfrecord.py
python main.py --mode train --output_dir trained_model --max_epochs 500
python main.py --mode test --output_dir result --checkpoint trained_model
1) Generated result sample
2) Values of Loss, SSIM, FID
🛠️ In progress...
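Of the metrics listed above, SSIM compares a generated glyph $x$ against its ground-truth image $y$; the standard formulation (not project-specific) is:

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

where $\mu$, $\sigma^2$, and $\sigma_{xy}$ are local means, variances, and covariance over a sliding window, and $C_1 = (0.01L)^2$, $C_2 = (0.03L)^2$ stabilize the division for a dynamic range $L$ (255 for 8-bit images).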
3) Figure: Loss comparison (MXFont, CKFont, My research)
🛠️ In progress...