Conference Information
FLLM 2025: International Symposium on Foundation and Large Language Models
https://fllm-conference.org/2025/

Submission Date: 2025-08-15 (Extended)
Notification Date: 2025-09-15
Conference Date: 2025-11-25
Location: Vienna, Austria
Edition: 3
Call for Papers
With the emergence of foundation models (FMs) and large language models (LLMs), which are trained on vast amounts of data at scale and are adaptable to a wide range of downstream applications, artificial intelligence is undergoing a paradigm shift. BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex, DALL-E, Whisper, and CLIP now serve as the foundation for new applications ranging from computer vision to protein sequence analysis, and from speech recognition to coding. Earlier models typically had to be trained from scratch for each new task. The capacity to experiment with, examine, and understand the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its path. Nevertheless, these models are largely inaccessible: the resources required to train them are highly concentrated in industry, and even the assets (data, code) required to replicate their training are frequently withheld because of their commercial value. At present, mostly large technology companies such as OpenAI, Google, Facebook, and Baidu can afford to build FMs and LLMs. Despite the widely publicized use of FMs and LLMs, we still lack a comprehensive understanding of how they operate, why they underperform, and what they are even capable of, owing to their emergent qualities. To address these problems, we believe that much of the critical research on FMs and LLMs will necessitate extensive multidisciplinary collaboration, given their fundamentally sociotechnical structure.
The International Symposium on Foundation and Large Language Models (FLLM) addresses the architectures, applications, challenges, approaches, and future directions of FMs and LLMs. We invite the submission of original papers on all topics related to FLLMs, with special interest in, but not limited to:
Architectures and Systems
Transformers and Attention
Bidirectional Encoding
Autoregressive Models
Massive GPU Systems
Prompt Engineering
Multimodal LLMs
Fine-tuning
Challenges
Hallucination
Cost of Creation and Training
Energy and Sustainability Issues
Integration
Safety and Trustworthiness
Interpretability
Fairness
Social Impact
Future Directions
Generative AI
Explainability and Explainable AI
Retrieval Augmented Generation (RAG)
Federated Learning for FLLM
Large Language Models Fine-Tuning on Graphs
Data Augmentation
Natural Language Processing Applications
Generation
Summarization
Rewrite
Search
Question Answering
Language Comprehension and Complex Reasoning
Clustering and Classification
Applications
Natural Language Processing
Communication Systems
Security and Privacy
Image Processing and Computer Vision
Life Sciences
Financial Systems
Last updated by Dou Sun on 2025-08-10
Related Conferences
Related Journals
| CCF | Full Name | Impact Factor | Publisher | ISSN |
|---|---|---|---|---|
| c | Information & Management | 8.2 | Elsevier | 0378-7206 |
| b | Information Processing & Management | 6.9 | Elsevier | 0306-4573 |
| | IEEE/ACM Transactions on Audio, Speech and Language Processing | 5.1 | IEEE | 2329-9290 |
| | Information Systems Management | 3.9 | Taylor & Francis | 1058-0530 |
| c | Computer Speech and Language | 3.4 | Elsevier | 0885-2308 |
| c | Proceedings of the ACM on Programming Languages | 2.8 | ACM | 2475-1421 |
| | Foundations of Computational Mathematics | 2.7 | Springer | 1615-3375 |
| | Journal of Computer Languages | 1.8 | Elsevier | 2665-9182 |
| a | ACM Transactions on Programming Languages and Systems | 1.5 | ACM | 0164-0925 |
| c | Computer Animation and Virtual Worlds | 0.9 | John Wiley & Sons | 1546-427X |