Journal Information
Multimedia Tools and Applications
http://www.springer.com/computer/information+systems+and+applications/journal/11042
Impact Factor: 1.53
Publisher: Springer
ISSN: 1380-7501
Views: 8315
Tracked: 28

Call for Papers
Multimedia Tools and Applications publishes original research articles on multimedia development and system support tools, and case studies of multimedia applications. Experimental and survey articles are appropriate for the journal. The journal is intended for academics, practitioners, scientists and engineers who are involved in multimedia system research, design and applications. All papers are peer reviewed.

Specific areas of interest include (but are not limited to):

Multimedia Tools:
Multimedia application enabling software
System software support for multimedia
Hypermedia
Performance measurement tools for multimedia
Multimedia authoring tools
System hardware support for multimedia
Multimedia databases and retrieval
Web tools and applications
Multimedia Applications:
Prototype multimedia systems and platforms
Multimedia on information superhighways

Home:
Video on-demand
Interactive TV
Home shopping
Remote home care
Electronic album
Personalized electronic journals
Last updated by Dou Sun, 2017-09-15
Special Issues
Special Issue on Next Applications in Multimedia Computing
Submission Deadline: 2017-11-30

We are surrounded by a vast amount of multimedia content generated by diverse sources, both professional and amateur, and many people consume multimedia content every day. Despite the vast number of documents describing all types of multimedia technologies, it is time to take a new perspective and foresee how techniques from various areas interact with multimedia techniques and applications. For example, mobile cloud computing is a new research topic; it is natural to ask what kinds of multimedia applications such a technique could support and what difficulties remain to be solved. As a second example, traditional virtual reality glasses are expensive, but we now have a cheap alternative (the Google Cardboard VR kit) with the aid of a smartphone; one may then wonder whether immersive audio can also be produced with a smartphone. As a third example, convolutional neural networks have been shown to outperform traditional methods on image recognition problems. If such a technique is applied to the retrieval and indexing of general multimedia content, can better performance be achieved? More evidence is needed to answer these questions. Furthermore, if multimedia content is to be consumed in a living room, the current trend is ultra-high picture resolution (4K and above) rendered with spatially layered audio (such as 22.2 channels arranged in top, middle, and bottom layers). To successfully deploy this type of multimedia content, we need new production machines, compression techniques, transmission media and methods, and new rendering devices and layout studies. Certainly, motion pictures with ultra-high resolution have many more applications yet to be studied. This special issue solicits high-quality technical papers addressing research achievements, practices, and challenges for emergent multimedia techniques.
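To make the CNN-based retrieval question above concrete, the following is a minimal sketch, assuming each multimedia item has already been reduced to a fixed-length feature vector (e.g., from a CNN's penultimate layer); the random vectors here merely stand in for such features, and `cosine_retrieve` is an illustrative helper, not an API from any library:

```python
import numpy as np

def cosine_retrieve(query_feat, db_feats, top_k=3):
    """Rank database items by cosine similarity to a query feature vector.

    query_feat: (d,) feature vector (hypothetically, a CNN's penultimate-layer output).
    db_feats:   (n, d) matrix of database feature vectors.
    Returns the indices of the top_k most similar items, best first.
    """
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarities, shape (n,)
    return np.argsort(-sims)[:top_k]   # indices sorted by descending similarity

# Toy example: random "features" standing in for CNN outputs.
rng = np.random.default_rng(0)
db = rng.standard_normal((100, 512))
query = db[42] + 0.01 * rng.standard_normal(512)  # near-duplicate of item 42
print(cosine_retrieve(query, db))  # item 42 should rank first
```

Whether such a pipeline beats traditional hand-crafted-descriptor retrieval on general multimedia content is exactly the open question the paragraph raises.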
Original research articles are solicited on all aspects, including theoretical studies, practical applications, new social technology, and experimental prototypes. This special issue calls for original papers describing the latest developments, trends, and solutions promoting multimedia techniques. Topics of interest include, but are not limited to:

Immersive audio
Multimedia indexing and retrieval
Augmented/Virtual reality
New image/audio/video compression techniques
Data mining, analysis and re-production for multimedia computing
New methods for privacy and security in multimedia
New techniques for information communication and networking in multimedia
Embedded systems for multimedia applications
UI/UX for multimedia content
Tools for multimedia content generation
Verification and testing for multimedia software
Last updated by Dou Sun, 2017-09-15
Special Issue on Spatial-Temporal Feature Learning for Unconstrained Video Analysis
Submission Deadline: 2017-12-15

With the development of the mobile Internet and personal devices, we are witnessing explosive growth of video data on the Web. This has encouraged research on video analysis. Compared to the trimmed videos in open benchmark datasets, most real-world videos are unconstrained. First, because they are captured under different conditions, unconstrained videos usually have large intra-class differences. Second, because they are captured by different devices and people, unconstrained videos vary more in quality. The success of hand-crafted descriptors lies in simultaneously incorporating a spatial description of each frame and the temporal consistency of successive frames. Recently, researchers have tried to learn video representations with deep ConvNets, where promising progress has been obtained owing to breakthroughs in appropriately pooling or encoding the temporal information of video sequences in deep neural networks. Because the visual content and temporal consistency of unconstrained videos are more complex, challenges remain in video analysis and practical applications. This special issue serves as a forum for researchers worldwide to discuss their work and recent advances in video feature learning and its real-world applications. State-of-the-art work, as well as benchmark datasets and literature reviews, is welcome for submission. This special issue seeks to present and highlight the latest developments in practical methods for unconstrained video analysis. Papers addressing interesting real-world applications are especially encouraged.
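The temporal pooling mentioned above can be sketched minimally as follows, assuming a per-frame CNN has already produced one feature vector per frame (the random array below is a stand-in for those activations, and `temporal_pool` is an illustrative helper, not a named method from the literature):

```python
import numpy as np

def temporal_pool(frame_feats, mode="avg"):
    """Collapse per-frame features (T, d) into a single video-level vector (d,).

    frame_feats: (T, d) array, one d-dimensional feature per frame
                 (hypothetically, per-frame ConvNet activations).
    mode: "avg" or "max" pooling across the temporal axis.
    """
    if mode == "avg":
        return frame_feats.mean(axis=0)   # average over frames
    if mode == "max":
        return frame_feats.max(axis=0)    # element-wise max over frames
    raise ValueError(f"unknown mode: {mode}")

# 30 frames of 128-dim features standing in for ConvNet outputs.
feats = np.random.default_rng(1).standard_normal((30, 128))
video_vec = temporal_pool(feats, mode="avg")
print(video_vec.shape)  # (128,)
```

Such order-agnostic pooling discards temporal ordering entirely, which is precisely why the recurrent and end-to-end encoding approaches in the topic list below remain active research directions.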
Topics of interest include, but are not limited to:

Feature learning by multi-cue fusion for unconstrained video analysis
Pooling the spatial-temporal layers in deep ConvNets
End-to-end integration of RNNs and CNNs for video feature learning
Effective feature learning for video captioning
Ad hoc feature learning for video event detection
Adapting unlabeled videos for robust feature learning
Transfer feature learning for video analysis
Spatial-temporal hashing and indexing for large-scale video retrieval
Unconstrained video benchmarks for the evaluation of feature learning
Real-world applications of unconstrained video analysis with feature learning, e.g., event detection, action recognition, retrieval, summarization, synthesis, and video-to-language captioning
Last updated by Dou Sun, 2017-06-14
Related Conferences
CCF | Abbrev | Full Name | Submission | Notification | Conference
    | ICTC | International Conference on ICT Convergence | 2017-07-31 | 2017-08-25 | 2017-10-18
    | PHM | Prognostics and System Health Management Conference | 2017-04-15 | 2017-05-15 | 2017-07-09
    | BigDataService | International Conference on Big Data Computing Service and Applications | 2017-11-30 | 2017-12-22 | 2018-03-26
b   | TrustBus | International Conference on Trust, Privacy, and Security in Digital Business | 2016-04-04 | 2016-06-06 | 2016-09-05
    | AEE | International Conference on Advances in Electrical Engineering | 2016-07-16 | 2016-07-25 | 2016-07-30
    | CIA | International Conference on Computer, Information and Application | 2016-03-30 | 2016-04-15 | 2016-05-19
    | ICCAAE | International Conference on Computer Applications and Applied Electronics | 2016-08-26 | 2016-08-31 | 2016-09-10
    | DCA | International Conference on Digital Contents and Applications | 2015-10-15 | 2015-11-10 | 2015-12-16
    | ICCTIM | International Conference on Computing Technology and Information Management | 2017-11-07 | | 2017-12-08