Technological advancement in automobile production, regulations, sales, and services generates extensive technical documents and manuals. However, lengthy documentation and language barriers make it challenging to find information quickly, highlighting the importance of summarization in bridging this knowledge gap. Large Language Models (LLMs) have demonstrated impressive capabilities in tasks such as question answering, coding, drafting, and scientific reasoning, and are commonly used to condense complex documents. Yet despite their wide adoption in sectors such as research and journalism, LLMs face challenges in reliably summarizing content: their token-by-token processing can fail to capture the true meaning and context of a document, especially for abstractive summaries. This paper explores Large Concept Models (LCMs), which operate at a higher conceptual level on principles of semantic reasoning, cross-modality integration, and hierarchical structuring, making them language- and modality-agnostic.