Highly recommended! How a beginner can systematically learn core CV knowledge in 1 month: link
Click @CV計(jì)算機(jī)視覺 to follow for more CV content
Papers have been packaged; click to go to the download page
Click to join the CV計(jì)算機(jī)視覺 discussion group
1. [Backbone: Transformer] (NeurIPS 2023) MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory
- Paper: https://arxiv.org/pdf/2310.16898
- Code (to be released): https://github.com/liangyn22/MCUFormer
2. [Open-Vocabulary Object Detection] LP-OVOD: Open-Vocabulary Object Detection by Linear Probing
- Paper: https://arxiv.org/pdf/2310.17109
- Code (to be released): https://github.com/VinAIResearch/LP-OVOD
3. [Video Anomaly Detection] A Coarse-to-Fine Pseudo-Labeling (C2FPL) Framework for Unsupervised Video Anomaly Detection
- Paper: https://arxiv.org/pdf/2310.17650
- Code (to be released): https://github.com/AnasEmad11/C2FPL
4. [Video Super-Resolution] (WACV 2024) Scale-Adaptive Feature Aggregation for Efficient Space-Time Video Super-Resolution
- Paper: https://arxiv.org/pdf/2310.17294
- Code: https://github.com/megvii-research/WACV2024-SAFA
5. [Image Enhancement] (NeurIPS 2023) Global Structure-Aware Diffusion Process for Low-Light Image Enhancement
- Paper: https://arxiv.org/pdf/2310.17577
- Code (to be released): https://github.com/jinnh/GSAD
6. [Domain Adaptation] SPA: A Graph Spectral Alignment Perspective for Domain Adaptation
- Paper: https://arxiv.org/pdf/2310.17594
- Code (to be released): https://github.com/CrownX/SPA
7. [Multimodal] (EMNLP 2023) Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models
- Paper: https://arxiv.org/pdf/2310.17530
- Code: https://github.com/coastalcph/gender-neutral-vl
8. [Multimodal] (NeurIPS 2023) Cross-modal Active Complementary Learning with Self-refining Correspondence
- Paper: https://arxiv.org/pdf/2310.17468
- Code (to be released): https://github.com/QinYang79/CRCL
9. [Autonomous Driving] Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models
- Paper: https://arxiv.org/pdf/2310.17642
- Project page: Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models
- Code (to be released): https://github.com/zswang666/drive-anywhere
10. [Autonomous Driving: BEV] BEVContrast: Self-Supervision in BEV Space for Automotive Lidar Point Clouds
- Paper: https://arxiv.org/pdf/2310.17281
- Code: https://github.com/valeoai/BEVContrast
11. [Autonomous Driving: Cooperative Perception] (WACV 2024) MACP: Efficient Model Adaptation for Cooperative Perception
- Paper: https://arxiv.org/pdf/2310.16870
- Code: https://github.com/PurdueDigitalTwin/MACP
12. [NeRF] 4D-Editor: Interactive Object-level Editing in Dynamic Neural Radiance Fields via 4D Semantic Segmentation
- Paper: https://arxiv.org/pdf/2310.16858
- Project page: 4D-Editor: Interactive Object-level Editing in Dynamic Neural Radiance Fields via 4D Semantic Segmentation
- Code to be released
Papers have been packaged: download link
CV計(jì)算機(jī)視覺 Discussion Group
The group includes experts in object detection, image segmentation, object tracking, Transformer, multimodal learning, NeRF, GAN, defect detection, salient object detection, keypoint detection, super-resolution, SLAM, face analysis, OCR, biomedical imaging, 3D reconstruction, pose estimation, autonomous-driving perception, depth estimation, video understanding, action recognition, image dehazing, image deraining, image inpainting, image retrieval, lane detection, point-cloud object detection, point-cloud segmentation, image compression, motion prediction, neural network quantization, model deployment, and more. Technical knowledge, interview tips, and referral/job postings are shared from time to time.
To join, please add the admin on WeChat: PingShanHai666. When sending the friend request, include a note: school/company + research direction + nickname.
Recommended reading:
CV Computer Vision Daily Open-Source Paper with Code Digest - 2023.10.26
CV Computer Vision Daily Open-Source Paper with Code Digest - 2023.10.25
CV Computer Vision Daily Open-Source Paper with Code Digest - 2023.10.24
CV Computer Vision Daily Open-Source Paper with Code Digest - 2023.10.23
使用目標(biāo)之間的先驗(yàn)關(guān)系提升目標(biāo)檢測(cè)器性能
HSN:微調(diào)預(yù)訓(xùn)練ViT用于目標(biāo)檢測(cè)和語義分割,華南理工和阿里巴巴聯(lián)合提出
EViT:借鑒鷹眼視覺結(jié)構(gòu),南開大學(xué)等提出ViT新骨干架構(gòu),在多個(gè)任務(wù)上漲點(diǎn)
如何優(yōu)雅地讀取網(wǎng)絡(luò)的中間特征?
港科大提出適用于夜間場(chǎng)景語義分割的無監(jiān)督域自適應(yīng)新方法