
HAKE: Human Activity Knowledge Engine

MVIG - Shanghai Jiao Tong University


Human Activity Knowledge Engine (HAKE) aims to promote human activity understanding. As a large-scale knowledge base, HAKE is built upon existing activity datasets and provides body-part-level atomic action labels (Part States). Given a human box, Activity2Vec converts it into a fixed-size vector combining visual and linguistic features for diverse downstream tasks, e.g. image/video action recognition and detection, captioning, VQA, visual reasoning, and image retrieval. Conventional instance-based methods enhanced with HAKE outperform state-of-the-art approaches on several large-scale activity benchmarks (HICO, HICO-DET, V-COCO, AVA, etc.).

We are now enriching HAKE to make it a general research platform for knowledge extraction and causal inference. Come and join us!


1) HAKE-Data (CVPR'18/20): HAKE-HICO, HAKE-HICO-DET, HAKE-Large, Extra-40-verbs.

2) HAKE-Action-TF, HAKE-Action-Torch (CVPR'18/19/20, NeurIPS'20, TPAMI'21): SOTA action understanding methods and their HAKE-enhanced versions (TIN, IDN).

3) HAKE-3D (CVPR'20): 3D human-object representation for action understanding (DJ-RN).

4) HAKE-Object (CVPR'20): object knowledge learner to advance action understanding (SymNet).

5) HAKE-A2V (CVPR'20): Activity2Vec, a general activity feature extractor based on HAKE data; it converts a human (box) into a fixed-size vector, PaSta scores, and action scores.

6) Halpe: a joint project of AlphaPose and HAKE, providing full-body human keypoints (body, face, hands; 136 points) for 50,000 HOI images.

7) HOI Learning List: a list of recent HOI (Human-Object Interaction) papers, code, datasets, and leaderboards on widely-used benchmarks.
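The human-box-to-vector interface described above (Activity2Vec: human box in, fixed-size feature plus PaSta and action scores out) can be sketched as a minimal toy in Python. This is an illustrative assumption, not the real HAKE-A2V API: the class, method, and field names are invented here, and the "features" are derived from box geometry instead of a trained visual-linguistic backbone.

```python
# Hypothetical sketch of an Activity2Vec-style interface.
# All names (Activity2Vec, A2VOutput, extract) are illustrative
# assumptions, NOT the actual HAKE-A2V API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class A2VOutput:
    feature: List[float]        # fixed-size embedding for downstream tasks
    pasta_scores: List[float]   # body part state (PaSta) scores
    action_scores: List[float]  # instance-level action scores

class Activity2Vec:
    """Toy stand-in: maps a human bounding box to a fixed-size vector.

    A real extractor would run a vision backbone over the cropped human
    and fuse visual features with linguistic PaSta embeddings; here we
    derive deterministic toy numbers from the box geometry only.
    """
    def __init__(self, dim: int = 8, num_pasta: int = 5, num_actions: int = 4):
        self.dim = dim
        self.num_pasta = num_pasta
        self.num_actions = num_actions

    def extract(self, box: Tuple[float, float, float, float]) -> A2VOutput:
        x1, y1, x2, y2 = box
        w, h = max(x2 - x1, 1e-6), max(y2 - y1, 1e-6)
        seed = w / h  # aspect ratio as a stand-in "signal"
        feature = [(seed * (i + 1)) % 1.0 for i in range(self.dim)]
        pasta = [(seed + i * 0.1) % 1.0 for i in range(self.num_pasta)]
        actions = [(seed * 0.5 + i * 0.2) % 1.0 for i in range(self.num_actions)]
        return A2VOutput(feature, pasta, actions)

# The point of the interface: the embedding size is fixed regardless of
# the input box, so downstream tasks can consume it uniformly.
a2v = Activity2Vec()
small = a2v.extract((0.0, 0.0, 50.0, 100.0))
large = a2v.extract((10.0, 20.0, 410.0, 620.0))
assert len(small.feature) == len(large.feature) == a2v.dim
```

The fixed output size is what makes the vector usable across the downstream tasks listed above (recognition, captioning, VQA, retrieval) without per-task feature engineering.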


HAKE statistics:
- 122 K+ Activity Classes
- 247 K+ Human Instances
- 220 K+ Object Instances
- 345 K+ Instance Actions
- Part State Classes
- 7 M+ Human Part States
- 1 M+ Object States