Eleven NTT Papers Selected for NeurIPS 2022
NTT Laboratories and NTT Research announced that their researchers have had seven papers accepted for presentation at the prestigious NeurIPS 2022 conference, along with four workshop papers, highlighting breakthroughs in artificial intelligence and machine learning. With a conference acceptance rate of only 25.6%, these papers showcase innovative methodologies in Bayesian estimation, few-shot learning, and more. The conference runs from November 28 to December 9; the accepted work reflects NTT's commitment to advancing technology for societal benefit.
- Seven papers accepted at the main NeurIPS 2022 conference indicate strong research output and expertise.
- The conference acceptance rate of 25.6% underscores the quality and relevance of NTT's research.
Accepted Papers by NTT Researchers
The NeurIPS 2022 program committee, comprised of more than 60 senior area chairs and hundreds of experts, accepted 2,672 out of 10,411 submissions this year – an acceptance rate of only 25.6%.
- Hideaki Kim and Taichi Asami of HI Labs, along with Hiroyuki Toda of Yokohama City University, presented their paper, titled "Fast Bayesian Estimation of Point Process Intensity as Function of Covariates," on Tuesday, Nov. 29, at 2pm (PST). In their paper, the researchers tackle the Bayesian estimation of point process intensity as a function of covariates and propose a novel augmentation of the Gaussian Cox process to derive a fast estimation algorithm that scales linearly with data size. They evaluate their algorithm on synthetic and real-world data, showing that it outperforms state-of-the-art methods in predictive accuracy.
- Daiki Chijiwa, Shinya Yamaguchi, Atsutoshi Kumagai and Yasutoshi Ida of CD Labs will present their paper, titled "Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks," on Wednesday, Nov. 30, at 2pm (PST). In their paper, they empirically show the existence of sparse deep neural network (DNN) structures that are less prone to overfitting small datasets, and that these structures can be identified by meta-learning with weight pruning. They also show that the meta-learned sparse structures can be used effectively across various domains. As a result, they help mitigate the overfitting problems that arise when DNNs learn from small amounts of data.
- Masaaki Nishino, Kengo Nakamura and Norihito Yasuda of CS Labs will present their paper, titled "Generalization Analysis on Learning with a Concurrent Verifier," on Wednesday, Nov. 30, at 9am (PST). Their research proposes a machine learning model paired with a verifier that can guarantee that the model's predictions satisfy a given specification, and theoretically analyzes how the generalization performance of the model changes when the verifier is used.
- Sanjam Garg of NTT Research, in collaboration with co-authors Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody and Mingyuan Wang, will present their paper, titled "Overparameterization from Computational Constraints," on Wednesday, Nov. 30, at 2pm (PST). Their paper asks whether the need for large, overparameterized models is due in part to the limitations of the learner, and whether the situation is exacerbated for robust (efficient) learning. The authors show that efficient learning could provably need more parameters than inefficient learning.
- Tomoharu Iwata of CS Labs and Atsutoshi Kumagai of CD Labs will present their paper, titled "Sharing Knowledge for Meta-learning with Feature Descriptions," on Wednesday, Nov. 30, at 2pm (PST). Their paper proposes a meta-learning method that learns how to learn models using data with descriptions from various tasks. The proposed method achieves high predictive performance with little training data on unseen tasks.
- Atsutoshi Kumagai and Yasutoshi Ida of CD Labs, along with Tomoharu Iwata of CS Labs, will present their paper, titled "Few-shot Learning for Feature Selection with Hilbert-Schmidt Independence Criterion," on Thursday, Dec. 1, at 9am (PST). Their paper proposes a few-shot learning method for supervised feature selection. Their method improves feature selection performance on small amounts of data by using information from related datasets.
- Yusuke Tanaka, Tomoharu Iwata and Yasuhiro Fujiwara of NTT CS Labs will present their paper, titled "Symplectic Spectrum Gaussian Processes: Learning Hamiltonians from Noisy and Sparse Data," on Thursday, Dec. 1, at 2pm (PST). Their paper proposes a Gaussian process model that incorporates the theory of Hamiltonian mechanics. Experiments on several physical systems show that the proposed model can accurately predict dynamics that follow the energy conservation or dissipation law from noisy and sparse data.
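The concurrent-verifier idea described above can be sketched in a few lines: a verifier sits alongside the model at prediction time and guarantees that the returned label satisfies a given specification. The toy sketch below is not the authors' implementation; the function name `verified_predict` and the class-membership specification are illustrative assumptions.

```python
def verified_predict(scores, satisfies_spec):
    """Return the highest-scoring class whose prediction passes the verifier.

    scores: per-class scores from any model.
    satisfies_spec: callable implementing the specification check.
    """
    # Try classes from best to worst score until one satisfies the spec,
    # so the returned prediction is guaranteed to meet the specification.
    for cls in sorted(range(len(scores)), key=lambda i: -scores[i]):
        if satisfies_spec(cls):
            return cls
    raise ValueError("no class satisfies the specification")


# Example: the specification only permits classes in {1, 3}.
scores = [0.9, 0.05, 0.03, 0.02]  # a raw argmax would pick class 0
allowed = {1, 3}
print(verified_predict(scores, allowed.__contains__))  # -> 1
```

Because the verifier restricts the hypothesis space to spec-satisfying outputs, the paper can analyze how generalization bounds change relative to the unconstrained model.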
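The Hilbert-Schmidt Independence Criterion (HSIC) at the heart of the feature-selection paper above measures nonlinear dependence between variables using kernel matrices. As a rough illustration of the underlying criterion only (not the authors' few-shot method), one can score each feature by its empirical HSIC with the target and keep the top-scoring features; the function names, RBF kernel, and bandwidth below are illustrative assumptions.

```python
import numpy as np


def rbf_kernel(x, gamma=1.0):
    """Gaussian (RBF) kernel matrix over the samples of x."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * x @ x.T  # pairwise squared distances
    return np.exp(-gamma * d2)


def hsic(x, y, gamma=1.0):
    """Biased empirical HSIC estimate (higher = more dependent)."""
    n = len(x)
    K, L = rbf_kernel(x, gamma), rbf_kernel(y, gamma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2


def select_features(X, y, k=2, gamma=1.0):
    """Rank features by HSIC with the target and keep the top k."""
    scores = np.array([hsic(X[:, j], y, gamma) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]


rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = np.column_stack([
    y + 0.1 * rng.normal(size=200),       # linearly relevant feature
    y ** 2 + 0.1 * rng.normal(size=200),  # nonlinearly relevant feature
    rng.normal(size=200),                 # pure noise
    rng.normal(size=200),                 # pure noise
])
print(select_features(X, y, k=2))  # the two informative features, 0 and 1
```

Note that HSIC, unlike linear correlation, picks up the quadratic feature as well; the paper's contribution is making this kind of selection work from only a few labeled examples by transferring information from related datasets.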
In addition, four workshop papers authored or co-authored by NTT researchers were accepted:
- "What shapes the loss landscape of self-supervised learning?" by Liu Ziyin, Ekdeep Singh Lubana, Masahito Ueda and Hidenori Tanaka (NeurIPS 2022 Workshop: Self-Supervised Learning - Theory and Practice)
- "Geometric Considerations for Normalization Layers in Equivariant Neural Networks" by Max Aalto, Ekdeep S. Lubana and Hidenori Tanaka (NeurIPS 2022 Workshop: AI for Accelerated Materials Design)
- "Mechanistic Lens on Mode Connectivity" by Ekdeep Singh Lubana, Eric J. Bigelow, Robert Dick, David Krueger and Hidenori Tanaka (NeurIPS 2022 Workshop: Distribution Shifts Connecting Methods and Applications)
- "Training physical networks like neural networks: deep physical neural networks" by Logan G. Wright, Tatsuhiro Onodera, Martin M. Stein, Tianyu Wang, Darren T. Schachter, Zoey Hu and Peter L. McMahon (NeurIPS 2022 Workshop: Machine Learning and the Physical Sciences)
View source version on businesswire.com: https://www.businesswire.com/news/home/20221130005340/en/
Media Contacts:

Chief Marketing Officer
+1-312-888-5412
chris.shaw@ntt-research.com

+1-804-362-7484
srussell@wireside.com

Source: NTT Research