
HAKE: Human Activity Knowledge Engine

MVIG - Shanghai Jiao Tong University

Introduction

Human Activity Knowledge Engine (HAKE) aims to promote human activity understanding. As a large-scale knowledge base, HAKE is built upon existing activity datasets and provides body-part-level atomic action labels (Part States, or PaSta). Given a human box, Activity2Vec converts it into a fixed-size vector that combines visual and linguistic features for diverse downstream tasks, e.g. image/video action recognition and detection, captioning, VQA, visual reasoning, and image retrieval. Conventional instance-based methods enhanced with HAKE outperform state-of-the-art approaches on several large-scale activity benchmarks (HICO, HICO-DET, V-COCO, AVA, etc.).
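The interface described above can be sketched as follows. This is a minimal, purely illustrative stand-in for Activity2Vec: the function name, feature dimensions, and pooling are assumptions for illustration, not the released API.

```python
import numpy as np

def activity2vec(image: np.ndarray, human_box: tuple) -> np.ndarray:
    """Hypothetical sketch: convert one human box into a fixed-size embedding.

    Stands in for combining visual features of the person region with
    linguistic part-state (PaSta) features; all shapes are placeholders.
    """
    x1, y1, x2, y2 = human_box
    crop = image[y1:y2, x1:x2]            # visual region of the person
    visual_feat = crop.mean(axis=(0, 1))  # placeholder pooled visual feature
    pasta_scores = np.random.rand(93)     # placeholder: 93 part-state classes
    return np.concatenate([visual_feat, pasta_scores])

# Usage: one fixed-size vector per detected human box
img = np.zeros((480, 640, 3), dtype=np.float32)
vec = activity2vec(img, (100, 50, 300, 400))
```

The key property is the fixed output size: every human box, regardless of scale, maps to a vector of the same dimensionality, so downstream models can consume it directly.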

We are now enriching HAKE to make it a general research platform for knowledge extraction and causal inference. Come and join us!

Project

1) HAKE-Data: HAKE-HICO, HAKE-HICO-DET, HAKE-Large, Extra-40-verbs.

2) HAKE-Action: SOTA action understanding methods and the corresponding HAKE-enhanced versions (TIN).

3) HAKE-3D: 3D human-object representation for action understanding (DJ-RN).

4) HAKE-Object: object knowledge learner to advance action understanding (SymNet).

5) HAKE-A2V: (coming soon) Activity2Vec, a general activity feature extractor built on HAKE data; it converts a human box into a fixed-size vector together with PaSta and action scores.

6) HOI Learning List: a list of recent HOI (Human-Object Interaction) papers, code, datasets and leaderboard on widely-used benchmarks.
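To make the PaSta idea from the list above concrete, here is a purely hypothetical sketch of what a body-part-level annotation record might look like. Every field name and value is an illustrative assumption, not the released HAKE data format.

```python
# Hypothetical annotation record: one human instance with both an
# instance-level action and body-part-level atomic actions (PaSta).
annotation = {
    "image_id": "example_000001.jpg",
    "human_bbox": [100, 50, 300, 400],      # human instance box (x1, y1, x2, y2)
    "instance_action": "ride_bicycle",      # instance-level action label
    "part_states": {                        # body-part-level atomic actions
        "hand": "hold_something",
        "hip": "sit_on_something",
        "foot": "step_on_something",
    },
}

# A downstream method could aggregate part states as evidence
# for (or against) the instance-level action:
evidence = sorted(annotation["part_states"].values())
```

The design point is decomposition: an instance-level activity is explained by a small set of reusable part-level atomic actions, which is what lets part-state knowledge transfer across activity classes.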

Statistics
122K+ images
156 activity classes
247K+ human instances
220K+ object instances
345K+ instance actions
93 part state classes
7M+ human part states
1M+ object states