By combining artificial intelligence and multi-omics analysis, Hanmi Pharmaceutical accelerated its H.O.P. project’s obesity drug candidate HM17321 to clinical trials in record time.
Hanmi Pharmaceutical’s flagship H.O.P. (Hanmi Obesity Pipeline) project aims to develop next-generation obesity treatments. Among its candidates, HM17321—a compound that reduces fat while preserving or even increasing muscle—has emerged as a standout. While Hanmi’s July press release mentioned the use of AI in the program, details of its application were revealed on October 22 during a joint new drug development symposium at the 2025 Fall International Conference of the Pharmaceutical Society of Korea.
The event, held at COEX Magok Convention Center and co-hosted with the Korea Pharmaceutical and Bio-Pharma Manufacturers Association (KPBMA) for its 80th anniversary, featured Executive Director Hae-Min Jeon, who explained how Hanmi halved its timeline to clinical trials.

AI-driven Peptide Structure Optimization
Jeon introduced HM17321, a peptide-based drug addressing the long-standing challenge of muscle loss in obesity treatment. Unlike existing GLP-1 agonists such as semaglutide, with which up to 40% of the weight lost can come from muscle, HM17321 promotes muscle gain.
AI was integrated from the earliest molecular design stage. The team sought a peptide that selectively activates the CRF R2 receptor—which regulates metabolism and stress response—without triggering the CRF R1 receptor that induces steroid secretion.
Using AlphaFold and a peptide sequence database, Hanmi identified early candidates outperforming a competitor’s Phase 2 compound in both selectivity and activity. Refinement came as experimental results were continuously fed back into the AI model, coupled with molecular dynamics simulations to capture receptor flexibility. Predictive accuracy for cAMP activity exceeded 70%.
“Now we can predict potency, selectivity, and toxicity simultaneously just from the sequence,” said Jeon. “AI has cut development time to less than a third of what previous projects required.”
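The design-test-learn loop Jeon described can be illustrated with a highly simplified sketch. The amino-acid composition features, the linear scorer, and the simulated assay below are all illustrative stand-ins, not Hanmi's actual models:

```python
# Illustrative only: a minimal iterative refinement loop in which a
# sequence-based model ranks peptide candidates, the top candidates are
# "assayed", and the measured activities are fed back to update the model.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each amino acid in the sequence (20-dim feature vector)."""
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

def predict_activity(weights, seq):
    """Toy linear stand-in for a learned cAMP-activity predictor."""
    return sum(w * x for w, x in zip(weights, composition(seq)))

def run_assay(seq):
    """Placeholder for the wet-lab assay: a hidden ground-truth model."""
    true_w = [0.2 if a in "KRH" else -0.1 if a in "DE" else 0.05
              for a in AMINO_ACIDS]
    return sum(w * x for w, x in zip(true_w, composition(seq)))

def design_cycle(candidates, weights, lr=0.5, n_test=4, rounds=5):
    """Each round: rank by prediction, assay the top peptides, and nudge
    the model toward the measured activities (simple gradient step)."""
    for _ in range(rounds):
        ranked = sorted(candidates,
                        key=lambda s: predict_activity(weights, s),
                        reverse=True)
        for seq in ranked[:n_test]:
            err = run_assay(seq) - predict_activity(weights, seq)
            weights = [w + lr * err * x
                       for w, x in zip(weights, composition(seq))]
    return weights

random.seed(0)
pool = ["".join(random.choice(AMINO_ACIDS) for _ in range(12)) for _ in range(50)]
weights = design_cycle(pool, [0.0] * 20)
best = max(pool, key=lambda s: predict_activity(weights, s))
```

In practice each piece is far richer (structure prediction, molecular dynamics, multi-objective scoring), but the loop structure of predict, test, and feed back is the same.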

Measuring Humans through Mouse Proteomes
Hanmi also used machine learning to translate preclinical results into human predictions. Leveraging the UK Biobank’s proteomic dataset of 53,000 individuals, the company reversed the model’s usual direction, feeding in mouse proteomic data to simulate how the equivalent signals might appear in humans.
Using the SOMAscan platform, Hanmi analyzed 4,000–5,000 mouse blood proteins and input the data into the model. The resulting profile resembled that of humans with reduced fat and higher muscle mass—matching HM17321’s intended effect.
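In spirit, the reversed workflow resembles the following sketch, in which a tiny synthetic human cohort and a nearest-neighbour lookup stand in for the real model. The data, the bridging step, and the method are all hypothetical placeholders:

```python
# Illustrative only: ask which human body-composition phenotype a
# (cross-species-aligned) mouse proteomic profile most resembles.
import math

def nearest_phenotype(human_profiles, mouse_profile):
    """Return the phenotype label of the human profile closest
    (Euclidean distance) to the bridged mouse profile."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(human_profiles, key=lambda rec: dist(rec["proteins"], mouse_profile))
    return best["phenotype"]

# Tiny synthetic cohort: each record is a protein-abundance vector plus label.
cohort = [
    {"proteins": [1.0, 0.2, 0.8], "phenotype": "high fat / low muscle"},
    {"proteins": [0.3, 0.9, 0.4], "phenotype": "low fat / high muscle"},
]
bridged_mouse = [0.35, 0.85, 0.45]  # mouse data after cross-species alignment
label = nearest_phenotype(cohort, bridged_mouse)  # "low fat / high muscle"
```

A real implementation would use thousands of proteins per sample and a trained statistical model rather than a single-neighbour lookup, but the question posed is the same: which human phenotype does this treated-animal profile look like?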
Jeon acknowledged limitations in cross-species differences and analytical methods, noting that Hanmi developed an internal bridging tool to align datasets. “Whether these correlations hold must be verified in clinical trials,” she said, adding that large-scale data collection will follow to confirm and refine the model.
Despite uncertainties, Jeon stressed the approach’s value in predicting whether preclinical results will reproduce in humans, a major hurdle in innovative drug development. “Monkey studies remain the gold standard,” she noted, “but this gives us a new tool to reduce translational risk before human studies.”

AI-powered Omics Delivers Proof and Confidence
To validate HM17321’s mechanism, Hanmi combined AI with LC-MS–based proteomics. Muscle tissue, rich in contractile proteins, poses analytical challenges: most studies detect only 6,000–8,000 proteins. Hanmi identified 10,000–11,000 in a single run, confirming activation of the mTOR pathway, increased protein synthesis, suppressed degradation, and enhanced glucose utilization, all hallmarks of muscle growth.
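Pathway readouts of this kind are typically produced with an over-representation test: is a pathway's gene set enriched among the proteins that changed? The sketch below shows one common form of that test, a hypergeometric tail, on invented numbers rather than Hanmi's actual data:

```python
# Illustrative only: hypergeometric over-representation test for a
# pathway among upregulated proteins in a proteomics run.
from math import comb

def hypergeom_enrichment_p(n_total, n_pathway, n_hits, n_overlap):
    """P(overlap >= n_overlap) when drawing n_hits proteins from n_total,
    of which n_pathway belong to the pathway (hypergeometric tail)."""
    p = 0.0
    for k in range(n_overlap, min(n_pathway, n_hits) + 1):
        p += (comb(n_pathway, k)
              * comb(n_total - n_pathway, n_hits - k)
              / comb(n_total, n_hits))
    return p

# Toy numbers: 10,000 proteins quantified, 60 in a hypothetical mTOR set,
# 300 upregulated, 12 of those in the mTOR set.
p = hypergeom_enrichment_p(10_000, 60, 300, 12)
```

With these toy numbers the expected overlap by chance is under two proteins, so an overlap of twelve yields a very small p-value, the kind of signal that lets a mechanism claim rest on data rather than conjecture.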
“These patterns resemble post-resistance training responses,” Jeon said. “First-in-class drugs often lack clear mechanistic proof, but omics gives us the confidence to map unseen pathways.”
Following these discoveries, Hanmi submitted an IND application to the U.S. FDA in September, just 30 months after project initiation—over twice as fast as standard timelines.
Jeon concluded, “The key is not how much AI you use, but how precisely you define the question. Target one problem, build the right tool, refine through feedback, and optimize continuously.”
