Reviewer Guidelines

Note: These guidelines may be subject to minor revisions before the submission deadline.

Thank you for reviewing for ACM Multimedia 2020; we appreciate your service. Your time and effort directly contribute to maintaining the high quality of the conference and to strengthening the multimedia research community.

As a Technical Program Committee (TPC) member, you are expected to be experienced in writing excellent reviews. In practice, however, we find that guidelines help streamline the process. Starting this year, ACM Multimedia will announce awards for the best reviewers of the conference; these guidelines also serve as a basis for the best-reviewer decisions.

The Golden Rule of reviewing: Write a review that you would like to receive yourself.

A review should be helpful to the authors, even if the review recommends the rejection of the paper.

Reviews are anonymous, but please make sure that you deliver your best work and write reviews that you would be proud to associate with your name.

Best practices for reviewing:

Check the paper topic:

  • Confirm that the paper you are reviewing falls into the topical scope of ACM Multimedia, as defined by the Call for Papers. Ultimately, we rely on your judgement and the collective wisdom of you and your peers to decide whether the paper aligns with multimedia topics.
  • Remember that the problem addressed by an ACM Multimedia paper ideally involves multi-modal data, or relates to the challenge of how people interpret and use multimedia. Papers that focus on a narrow aspect of a single modality beyond the interest of the multimedia community, and that also fail to contribute new knowledge about how people use multimedia, may be rejected as out of scope for the conference.
  • Although many submissions to ACM Multimedia make a technical contribution in the form of a new algorithm, not all do, nor does ACM Multimedia require it. Do not undervalue papers that study new multimedia problems merely because they make no novel algorithmic contribution. Instead, judge these papers by the novelty of their insights and the value those insights could have for the community.

Support your statements:

  • Reviews should not just state, “It is well known that…”, but rather, they should include citations.
  • Reviews should not just state, “Important references are missing…”, but rather, they should include examples. Reviewers should cite their own work only in the very rare cases where it is indeed the most relevant reference for the authors to consult.
  • Reviews should not just state, “Authors should compare to the state of the art…”, but rather, they should cite specific work (i.e., peer-reviewed references) that the authors should have considered, and explain why.
  • Authors appreciate it when reviewers are generous with their feedback.

Special regulations due to COVID-19:

  • Reviewers should take into account that certain types of experiments involving people, e.g., user studies and dance experiments, were more difficult to perform this year because of the social distancing measures in many locations. User studies and similar experiments should therefore be reviewed with this in mind.

Respect the authors:

  • Reviews should critique “the paper”, and not the authors.
  • Reviews should not address the authors directly, especially not as “you”. (A direct address can be interpreted as an affront by the reader.)
  • During the review process, no attempt should be made to guess the identity of the authors. (If you discover it by accident, please complete your review, but notify your AC.)

Please include in your review:

  • Statement of novelty: What does the paper contribute? Is that contribution valuable for the multimedia research community? Does the paper cover all the relevant related work, and explain how its contribution builds on the related work?
  • Statement of scientific rigor: Are the experiments well designed? Are the experiments sufficient to support the claims made by the paper? Are they reproducible? Have the authors released a resource, such as a data set or code?
  • Fixes that the authors should make for the camera-ready version. We can trust the authors to correct minor errors; during the rebuttal, authors generally also commit to correcting minor errors found during the review process. However, major flaws must lead to rejection: since the paper does not go back to the reviewers for checking, it is not possible to confirm that the authors have actually corrected major flaws successfully.

Ensuring review quality:

  • When you finish a review, and before you submit it, please check it over to make sure that it follows these guidelines. Checking your review is good practice and will also save the ACs the effort of chasing you.
  • Note that high-quality, accurate reviews will also help ensure that the authors do not request that your review be referred to the Authors’ Advocate.

Rebuttal:

  • Reviewers should not ask for new results or experiments to be included in the rebuttal. Final recommendations should be based on the results in the original paper; any new results (e.g., new experiments or theorems) in the rebuttal should not be considered.
  • When the authors return their rebuttals, please read them carefully; the authors devoted a great deal of effort to writing them.
  • Take the rebuttal into consideration by updating your review or otherwise responding to requests from the AC.

Policy on arXiv papers: We consider a “publication” to be a manuscript that has undergone peer review and has been accepted for publication. This means that the following points apply to arXiv papers (and any other papers available online that have not been peer reviewed):

  • If the paper that you are reviewing is available on arXiv, and has not been published elsewhere, it is an acceptable submission to ACM Multimedia, since arXiv papers are not peer reviewed and are not publications;
  • Please do not insist that the authors cite a paper that is only on arXiv and has not otherwise been published. Since arXiv papers are not peer-reviewed publications, missing an arXiv paper does *not* count as missing related work;
  • Likewise, if the authors do not compare their work with an approach described in an arXiv paper, it does *not* count as a weakness in their experimental evaluation of their own approach;
  • If you know of an interesting arXiv paper relevant to the paper you are reviewing, you are more than welcome to tell the authors about it, but make sure you mark the reference as FYI (“for your information”) so that the authors know that you do not regard it as missing related work.

Responsibilities of Area Chairs:

  • Each paper is assigned to two Area Chairs (ACs).
  • Both ACs take charge of soliciting reviews, summarizing the strengths and weaknesses of each paper, and making a recommendation.
  • After both ACs have made their recommendations, they are expected to reach a consensus. If their recommendations differ, the ACs will discuss the paper, which ideally leads to a consensus (although consensus is not required).
  • The final decision (Oral/Poster/Reject) will be made during the TPC meeting. The ACs will prepare the final decision and the meta-review, summarizing the discussion during the TPC meeting and the input from both ACs, before these are released to the authors.
  • CMT3 is designed to manage COI papers (i.e., papers co-authored by an AC in the same track). Such a situation will only rarely happen (e.g., in smaller tracks). COI papers cannot be seen by the co-authoring AC, so the second AC will take care of them. If you have doubts about this, please contact the TPC chairs.
  • If you have any questions about the guidelines, please contact the Technical Program Chairs at mm20-tpc@sigmm.org.