Natural-hazard-triggered technological accidents (Natechs) pose compound risks to the process industries, yet large historical databases remain under-utilized due to unstructured narratives and keyword-based screening. In this work, we develop an automated, data-driven framework that fine-tunes generative large language models (LLMs) to jointly (i) classify Natech status and the primary hazard, (ii) extract affected unit–issue pairs, and (iii) generate brief, evidence-style justifications from incident text. Using the Texas Commission on Environmental Quality (TCEQ) air emission event database (2004–2024) as a region-specific testbed, we construct a supervised fine-tuning corpus via a schema-constrained template and evaluate the fine-tuned LLMs against LSTM and BERT baselines. The best fine-tuned model leads on every metric, with an overall accuracy of 0.958 and a macro-F1 of 0.930, while a compact 3B variant remains competitive, demonstrating the superior performance and data efficiency of pretrained transformers under constrained supervision. Applied at scale, the framework quantifies climate-related patterns in Texas. By frequency, Natech incidents account for ∼6 % of statewide records, with counts surging during extreme years (hurricanes in 2005, 2008, and 2017; the winter freeze in 2021). By excessive emissions, Natechs contribute ∼10 % statewide and ∼14 % in coastal Texas; along the coast, hurricanes dominate and yield a disproportionately large share of Natech releases. The framework delivers single-pass, structured analytics that reduce manual effort and improve reproducibility, providing decision-ready evidence for emergency preparedness and mitigation. Looking ahead, coupling the model with retrieval-grounded weather data and human-in-the-loop audits could enable a production-grade Natech analytics agent for continuous monitoring and planning.
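
To make the schema-constrained template concrete, the sketch below shows what one supervised fine-tuning record might look like, pairing an incident narrative with a structured target covering tasks (i)–(iii). All field names, values, and the narrative itself are illustrative assumptions for exposition, not the authors' actual schema or data.

    # Illustrative sketch only: field names and example content are assumptions,
    # not the paper's actual schema. One fine-tuning record pairs an incident
    # narrative (input) with a schema-constrained target (output) covering
    # joint classification, pair extraction, and justification.
    import json

    record = {
        "input": (
            "Hurricane-force winds caused a site-wide power loss; "
            "the flare and a compressor vented unplanned emissions."
        ),
        "output": {
            "natech": True,                    # (i) Natech status
            "primary_hazard": "hurricane",     # (i) primary natural hazard
            "unit_issue_pairs": [              # (ii) affected unit-issue pairs
                {"unit": "flare", "issue": "unplanned venting"},
                {"unit": "compressor", "issue": "power loss"},
            ],
            "justification": (                 # (iii) brief evidence-style rationale
                "Narrative attributes the release to hurricane-force "
                "winds and the resulting power loss."
            ),
        },
    }

    # Serialize to the text form a generative LLM would be trained to emit.
    print(json.dumps(record["output"], indent=2))

Constraining the target to a fixed schema like this is what enables single-pass, structured analytics downstream: every model output parses into the same fields, so statewide aggregation (e.g., Natech frequency by hazard type) reduces to counting over parsed records.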