The idea for Rubriq came from asking questions of the researcher, editor, and reviewer communities. We know these communities have many questions about us and about what we're creating, and we've tried to address as many of them as we can in this FAQ section.

We don't have all the answers yet, and as we continue the conversation, these questions and answers will evolve. The FAQs below list the questions and answers we've resolved, but we will also post open issues on our blog, as we are looking to people like you to help find the best answers. If we have missed a key topic in these FAQs below, please email us at

General questions about Rubriq

Questions from Authors

Questions from Journals

Questions from Reviewers

General questions about Rubriq

  1. Are you trying to eliminate the traditional peer review process?

    No. At Rubriq we believe that double-blind, pre-publication peer review is the best way we currently have to evaluate research with limited bias and a well-understood measure of value. We looked at many models of peer review, as well as data and feedback from the community about peer review systems. Our goal was to retain all the great things about the current process, but eliminate some of the redundancy between journals.

    Traditional journals can use the Rubriq system as a screening step before their peer review process to help filter papers and accelerate their time to acceptance. The assessment of whether or not a journal is interested in a paper is still separate from an independent assessment of quality. Other types of publishers (e.g., Green or Gold open access) could choose to use the Rubriq system as their sole method of peer review. But making peer review independent does not require that the traditional process goes away—it just makes it more efficient.

  2. Isn’t peer review free?

    Just because reviewers don't often get paid for their time doesn't mean that peer review is free. It's just that it's hard to see the costs from the outside. There is an indirect cost for the value of a reviewer's time, since they are taking time away from their other work. The current traditional peer review process relies on editorial and other journal staff to keep it running, and they all need to get paid for their time and effort. The process is also usually supported by software systems that can cost publishers considerable resources to manage. Larger publishers have to pass those costs on to journals, and both journals and publishers have to pass costs on to libraries, institutions, subscribers and authors. Since the costs incurred for peer review are directly related to publication of individual articles, they are commonly worked into the publication fees paid by the author. Those are often paid by authors out of grants or other funding. So all players in the current system end up paying for a piece of peer review, one way or another.

    At Rubriq we need to pay for similar types of people and software to run our own process, but we feel we can do it more efficiently with the new workflow system we've created. The wasted time and money we want to address isn't happening within any single editorial office - it's the redundancy that's happening across the entire system as papers are re-reviewed. So although we will have some operating costs, we see our main direct costs being payments to reviewers.

    So why pay reviewers? First, philosophically we feel that reviewers should be compensated for the valuable service they provide for the scientific community. Second, providing payment makes the process more formal, and can lead to more standards, training, and recognition. Finally, in order to be able to deliver high-quality, consistent reviews in a two-week timeline, it is important to provide compensation for that commitment. And by offering reviewers the option of an honorarium in lieu of payment, some of those earnings can even go directly back into research organizations.

  3. How much does it cost?

    We are currently accepting submissions in all fields. The turnaround time will be two weeks as we continue to build our reviewer pool. Authors will receive the Rubriq scorecard with feedback from three experts, a journal recommendation report based on the score and content of the paper, and a plagiarism report from iThenticate. The price for this service is $650. If a submission is in an area of study for which we are unable to provide a journal recommendation report, the cost will be reduced.

  4. Whose side are you on - the author or the journal? Open access vs. traditional?

    We’re on everyone’s side. That’s the whole idea of independent peer review. By taking some of the filtering burden off the journals and leading them to new authors, we help them become more efficient—without requiring them to make huge changes and without any additional cost. At the same time, this enables authors to find the right journal faster, eliminating unnecessary "journal submission loops." We hope that authors get better insights into their papers so that they can continue to improve. We want to help researchers find the right match sooner so they can spend more time at the bench instead of getting caught up in endless loops of submission, rejection and resubmission.

    Both open access and traditional publishers can be a part of the Rubriq network. In fact, any type of publishing system can use the Rubriq system, scorecard and R-Score. While some startup open access journals or alternative publishers may choose to replace or outsource their peer review process with the Rubriq system, traditional publishers can use us in addition to their own processes in order to maximize operational efficiency. Although we do support the open access movement, the Rubriq system is essentially "publisher agnostic", and can be used to benefit any type of publishing.

    Our goal is to create an efficient and centralized system that benefits everyone, but also that minimizes any potential bias or conflict of interest. By existing outside any particular group, maintaining a balanced advisory board, and relying only on self-funding, we hope to be able to do just that.

  5. Who are your reviewers? What are the qualifications?

    Rubriq Reviewers will be the same academic reviewers that journals use for their peer reviews. They will be required to have strong publication records and significant peer-review experience for well-respected journals in their fields. Our reviewers will need to have completed a doctoral-level degree (MD, MD-PhD, PhD, DDS, DMD, or DVM) at an accredited university. In addition, they must be active postdoctoral or faculty-level researchers at top research universities or institutes from around the world. Many Rubriq reviewers may also act as academic editors or editors-in-chief for well-known journals. Rubriq seeks to attract peer-reviewers who enjoy reviewing and who take the time to provide well-written, detailed, constructive feedback to fellow researchers in their fields. If you are interested in applying to become a reviewer, please sign up and click on “Apply to be a reviewer”.

  6. Why would busy reviewers make Rubriq reviews a priority?

    The Rubriq system offers reviewers a real-time record of their completed reviews, which they can use to support their academic goals. The standardized online scorecards also make the review process go faster, since the reviewer doesn't have to adjust to a new set of questions and expectations each time.

    Reviewers also have more control and flexibility over when they receive requests for reviews; they can vary their availability to review according to their own schedule. Finally, we know that all of our reviewers are extremely busy, so we offer each reviewer compensation to say thank you for their time and effort. It’s our way of acknowledging their critical role in the peer review process. We are also exploring other options for compensation (donations, credits for the reviewer's lab or society) for those who prefer not to receive direct payment.

  7. Will the scorecard apply across all fields, and all types of papers?

    The Rubriq scorecard was primarily designed to evaluate manuscripts reporting original scientific research in the biological and medical sciences, but it is also applicable to papers in the psychological and environmental/ecological sciences, among others. Minor variations in the standard publication requirements of a particular field can be taken into account by the expert reviewers when they fill out the scorecard. We have recently completed work on customized scorecards for humanities/social sciences, physical sciences, engineering, math/computer science, and economics, as well as a statistical analysis add-on that can be included with any scorecard. Contact us at if you want to confirm coverage in your area of study. The scorecard was designed primarily for original basic research, original clinical research, clinical trials, and meta-analyses/systematic reviews. Though we do have a scorecard for clinical case studies, we do not currently have scorecards designed specifically for literature reviews, letters to the editor, communications, position papers, or short reports.

  8. What is the main purpose of the scorecard?

    We believe that the peer review process would benefit from structure and that a rigorous review can be achieved using a standardized instrument. The Rubriq scorecard is the cornerstone of a transparent peer review process that allows manuscripts to be uniformly evaluated outside the lens of one specific journal. By translating the essential elements of a well-constructed, well-communicated research story into quantifiable metrics, the Rubriq scorecard provides clear feedback to authors while generating thorough reviews that can be transferred from journal to journal. The ability to map scores from various sections or subsections of the scorecard to individual journals facilitates data-driven manuscript-journal matching, which helps authors quickly identify the highest impact, best fitting journal for their paper.

  9. How does the scorecard work?

    The scorecard is broken into three sections: Quality of Research, Quality of Presentation, and Novelty and Interest; these are each broken into subsections. Click to view a sample report. At the heart of the scorecard is a set of rubrics for each subsection designed to help calibrate reviewers around the key items essential to a paper following standard scientific methodology. The rubric itself consists of a series of items that may be missing or inadequate in the manuscript. As reviewers select items from these lists, suggested scores are generated based on the importance of the item. Reviewers may adjust the subsection scores according to their judgment.
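    As a rough sketch of this mechanic, a subsection's suggested score can be modeled as a base value minus a deduction for each deficiency item the reviewer selects, with the reviewer free to adjust the result. Everything below (the 1-5 scale, item names, and deduction sizes) is hypothetical, not Rubriq's actual rubric.

```python
# Hypothetical sketch of how a rubric subsection might turn selected
# deficiency items into a suggested score. The scale, item names, and
# deduction weights are illustrative assumptions only.

def suggested_score(selected_items, deductions, base=5.0, floor=1.0):
    """Start from a perfect base score and subtract a deduction for each
    deficiency item the reviewer selects, never going below the floor."""
    score = base - sum(deductions[item] for item in selected_items)
    return max(score, floor)

# Example rubric items for a "Methods" subsection (illustrative only).
methods_deductions = {
    "controls_missing": 1.5,        # more important items deduct more
    "statistics_inadequate": 1.0,
    "sample_size_unjustified": 0.5,
}

print(suggested_score({"statistics_inadequate"}, methods_deductions))  # 4.0
# The reviewer may then adjust this suggested value before submitting.
```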

  10. What's an R-Score?

    The R-Score is the overall score for a paper that combines all three of the sections. Overall section scores are calculated from the reviewer input, weighted according to the importance of each subsection, and the scores from all three reviewers are averaged. The upper limit of the overall score is determined by the novelty and interest score. A well-executed study with limited interest may have high quality scores, but a low overall score because the novelty and interest value is low.
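    The aggregation described above can be sketched in a few lines: subsection scores roll up into weighted section scores, the three reviewers' section scores are averaged, and Novelty and Interest caps the overall result. All weights, names, and the 1-5 scale here are illustrative assumptions; Rubriq's actual weightings are not public.

```python
# Illustrative sketch of the R-Score aggregation; all numbers are hypothetical.

def section_score(subscores, weights):
    """Weighted average of one section's subsection scores for one reviewer."""
    return sum(subscores[k] * w for k, w in weights.items()) / sum(weights.values())

def r_score(reviewer_sections):
    """Average each section across reviewers, then cap the overall score
    at the averaged Novelty and Interest score."""
    n = len(reviewer_sections)
    avg = {s: sum(r[s] for r in reviewer_sections) / n
           for s in ("quality", "presentation", "novelty")}
    overall = sum(avg.values()) / 3
    return min(overall, avg["novelty"])  # novelty sets the upper limit

# One reviewer's Quality of Research section from weighted subsections:
quality = section_score({"methods": 4.0, "analysis": 5.0},
                        {"methods": 2.0, "analysis": 1.0})  # 4.33...

# Three reviewers' section scores for a well-executed but low-interest paper:
reviews = [
    {"quality": 4.5, "presentation": 4.0, "novelty": 2.0},
    {"quality": 4.0, "presentation": 4.5, "novelty": 2.5},
    {"quality": 5.0, "presentation": 4.0, "novelty": 2.0},
]
print(round(r_score(reviews), 2))  # 2.17 -- capped by the novelty average
```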

  11. How has the scorecard been validated?

    Our scorecard has been through several rounds of construction, review, testing and re-construction. Some highlights of the development are detailed in our white paper, which shows the process of our scorecard development and validation to date. We are currently continuing cycles of testing and feedback, especially as we create new versions of the scorecard for other areas of study. Click to download our white paper, "The Science of the Scorecard" (PDF).

  12. Why are you called "Rubriq"?

    We chose the name "Rubriq" because the word embodies a well-established set of parameters and the process of review. We felt that the core element of our entire system was the rubric we developed. This rubric, which is now patent-pending, is our new standardized protocol for evaluating the quality of research articles. It is what enables the system to be standardized and transferable across journals. We spell our rubric with a Q because our system is not just about the methods, it’s also about the resulting score and how it is used to effectively match a paper with the right journal – a rubric in itself. So for us, Q represents the quotient in the final score, but also more generally stands for Quality.

  13. What does the blue speech bubble thing mean on your logo?

    The Rubriq scorecard uses a common notation system to visually represent a range of scores. This icon system is referred to as "Harvey Balls" and features a range of five icons with different amounts of the ball filled in. The top score on our scale is indicated by a solid blue circle with a small white dot in the center.

    In our logo, this symbol of highest quality is also turned into a speech bubble. This represents the concept that our evaluations don't just exist alone, but are a key part of an entire communication system. The R-Score and Rubriq scorecard are designed to connect research with the best-match journal. With the Rubriq system, the score goes beyond just the author and can communicate its quality to the entire research community. We made it blue because, well, we all liked the color blue.

  14. Where do you get your funding?

    Rubriq’s initial startup funding comes from Research Square, its parent company. Research Square was created with the purpose of incubating and launching startups without the burden of ties to any external funding, investors or shareholders. All Research Square divisions are privately held, and are connected by a common purpose to serve the research community as a for-benefit organization (living in the space between for-profit and non-profit). Organizations under this parent have the freedom to operate independently to create a positive impact on the research community, but also need to achieve sustainability on their own. After the initial beta/startup phases, Rubriq will be funded directly by its submission fees.

  15. How are you connected to American Journal Experts?

    Both Rubriq and American Journal Experts (AJE) are separate divisions under a common parent, Research Square. Research Square is a privately-held company founded by entrepreneur Shashi Mudunuri, who also created American Journal Experts in 2004, and who co-founded Rubriq with Keith Collier. Among other services, AJE has provided pre-submission peer review (NewReview) and Journal Recommendation services to help authors on the path to publication. The majority of Rubriq’s startup team were recruited from the same AJE departments that developed these services, so that they could apply their unique knowledge and experience to independent peer review. To learn more about us, visit the Our Team page.

Questions from Authors

  1. Does getting a score on my paper mean I don’t need to go through the review process for a journal? If not, what does it mean exactly?

    This answer will depend on the journal. Some journals may accept the reviewers’ comments, while others may prefer to use the scorecard as an initial screening but send your paper out for an additional round of peer review. Either way, journals that receive your scorecard will be able to see the reviewers’ comments about your manuscript as well as their credentials as scientists. Then the journal's editorial office can decide whether an additional round of review is necessary. As the network grows, any journal or publisher that chooses to replace its review process entirely with the Rubriq system will have that designation visible in our system, so you will be able to see those options clearly.

  2. What are the Sound Research Stamps?

    These stamps are based on direct responses from the reviewers who evaluated the manuscript, and they allow authors and journal editors to know when a manuscript meets the standards of sound research. These stamps help provide even more context and clarity to the detailed feedback that the reviewers provide through the Rubriq Scorecard.

    There are two Sound Research Stamps available:

    Sound Research Certified: This stamp is given when the reviewers' feedback indicates that the work constitutes sound research, either with or without minor revisions to the manuscript.

    Sound Research Potential: This stamp is given when the reviewers' feedback indicates that the work has strong potential, but requires deeper revisions to the research or presentation of the work before publication.

    Awarding of these stamps is based on the reviewers’ responses to a very simple question: "Disregarding any consideration of novelty, does this work represent technically sound research?" This question is the core criterion used by many broad-scope, open access journals, and it gives both authors and journal editors the ability to place a manuscript in the context of the published literature.

  3. How can I trust that this scorecard is an accurate assessment of my research paper?

    Our scorecard is built around a reviewer rubric, which has been evaluated for content validity and sensitivity by reviewers, managing editors, and editors-in-chief. The weightings of each subsection and the deductions associated with the items in the rubric are being tested and refined during our beta launch based on regressions between our suggested score and the actual scores given by the reviewers. As we collect more data and map scores to journals, the accuracy of the assessment will continue to increase. For more information about the details of our validation testing, please visit our Scorecard page. In addition, our reviewers are the same researchers who review for journals, and they are carefully screened and matched with papers based on their background. Each set of reviews for a scorecard is also checked by one of our Rubriq Managing Reviewers, who check for overall quality, completeness and consistency. Reviewers receive ongoing feedback about the quality of their reviews, and how they compare to other reviewers in the system.

  4. Do I have to do reviews of other papers to get my paper reviewed?

    No. Although you have the option of applying to be a reviewer, you do not have to barter or trade your time reviewing other papers in order to submit your manuscript. Instead, we charge a straightforward fee for your Rubriq Report and pay our reviewers. This way, all authors can receive the benefit of experienced reviews, even if they are not experienced enough in their careers to be a reviewer.

  5. Why is the author the one who pays?

    We want authors to be our direct customers, so that all of the control stays with them. Authors receive value from the peer review that can help them improve their papers. But unlike the current peer review process, they can get the benefit of this feedback without burning any bridges or starting off on the wrong foot with a target journal. They also receive a transferable review - one that is not specific to a single journal's taste or subject preferences. This feedback should benefit the author regardless of the journal that is ultimately chosen. And the service gives the author recommendations of journals that are likely to be the best matches. With a two-week turnaround time for review and a recommended set of journals that are the highest-impact fit, authors will save enormous amounts of time in their publishing processes. If they publish faster, they can reap the rewards of publication sooner, and can continue to advance their research. Since the author pays, they also retain control over their score and how it is used. They can choose to keep it confidential, and can choose when and where to share it.

    If journals and/or publishers were to be our primary paying customers, our process of recommending journals to authors could be perceived as biased, since the journals in the system we recommend would be paying the bills. It would also most likely limit the scope of journals we could recommend - paying journals would want to get exclusivity over journals not in the system. This would result in less-than-optimal matches for authors, since not all journals would be represented. The way our model will put the most time back into research is if our journal network is as broad and engaged as possible and authors have ultimate control over their reviews, so that the matches we make are the most accurate. The fact that we are providing an extraordinary value for journals for free means that there are no barriers for them to participate, and we can grow the network quickly to provide the most benefit to authors.

  6. How do I share my scorecard with colleagues or journals?

    Each Rubriq Report will have a unique verification code. Share your code with colleagues or journals, and they can enter it at You can also download a printable PDF version if you prefer.

Questions from Journals

  1. What information will I see in a Rubriq Report? How will I access it?

    Authors may choose to include the Rubriq Report with their submission in PDF format, or provide a verification code for you to enter at to view the report online. The report contains the Rubriq Scorecard with comments, reviewer profiles, and an iThenticate plagiarism report. See a sample of our online scorecard here.

  2. How can you be providing this for free to journals? What’s the catch?

    We operate on an author-pay model. One major benefit to an author submitting to the Rubriq system is our journal recommendation engine, which is made more robust by journal participation. As more journals join the Rubriq network and indicate their preferences, our ability to add value for authors increases. By not charging a fee for journals to join, we eliminate any barriers to growing the network and giving the most value. Providing a useful and free tool for editors is a positive side effect of this model.

  3. Is this just for Open Access (OA) journals?

    We believe that the Rubriq system is useful for all journals, regardless of their publication model. Traditional journals may find value in the Rubriq Report to supplement their own review process or attract appropriate papers for submission. More selective journals might use R-Scores as a filter to catch “diamonds in the rough” during the triage process. Mega OA journals may find the R-Scores useful in setting an initial reception rating to help readers filter a large body of content upon initial publication. The R-Score or complete Rubriq Report could be the starting point for post-publication peer review, comments, and rating systems. Irrespective of the journal type, the Rubriq system is a flexible (and free) tool that helps journals select the best content for their readers.

  4. How do I use it? Our journal already has a review process. Are you trying to replace that?

    The decision to publish or not lies in the hands of the journal. At Rubriq, we are trying to remove the redundancy in the academic publishing process by creating a standardized instrument that is useful to all journals. How a journal chooses to use Rubriq reviews is entirely up to that journal. We have talked with newer open-access journals that plan to use the Rubriq system for their entire peer review process; however, the majority of editors see our reports as tools to filter papers, find interesting manuscripts before submission, or supplement their current review process so that more resources can be directed toward their many other tasks.

  5. What information will you provide journals about the reviewers?

    As part of the Rubriq scorecard with all reviewer comments and ratings, journals will be able to see the name of each reviewer and the institution where he or she is currently employed. We are currently exploring options for providing additional information on each reviewer. If there are specific things you would like to see, we welcome your feedback!

Questions from Reviewers

  1. Do I qualify to be a Rubriq reviewer? How do I apply?

    Rubriq reviewers must have a terminal degree in their field (MD, MD-PhD, PhD, DDS, DMD, or DVM), hold a current academic appointment (active postdoctoral or faculty-level researchers), and have previous peer review experience. Although we are currently only accepting papers in Immunology, Cancer Research and Microbiology, we are recruiting reviewers from all STM fields in preparation for our next phases of launch. To be considered, simply sign up on our site, click on “Apply to be a reviewer” from your dashboard, and complete the application. You will hear back from our team about the next steps. Once accepted, you will see a new Reviewer Dashboard on your account when you log in.

  2. As a reviewer, how would I be matched to papers?

    We have developed a software tool that makes highly accurate matches between manuscripts and reviewers based on a keyword matching algorithm. Papers are classified as either clinical or basic research, then matched by MeSH terms pulled from the abstract and title of the submission compared to MeSH terms from representative publications of the reviewers. The more information and keywords you provide on your reviewer profile, the better your matches will be. We also take reviewer availability and requested workload into account when distributing papers. A member of the Rubriq team reviews the suggested matches from the database, checks the information on each potential reviewer to confirm that the reviewers' expertise matches the topic, then makes a decision on who to invite for the review.

    Since we are still building our database and are trying to match expertise as specifically as possible, we expect to also recruit outside our reviewer database. When we receive a manuscript and do not have an active reviewer with the right experience for a particular manuscript, we invite reviewers who have published manuscripts in similar areas of research and who are experts in those areas. We make sure that the reviewers who review the papers are well matched to the papers based on the specific topic, model system, and methods used in the paper. It is very important to us that we find the right reviewers for the paper. Many of our current reviewers are faculty members who review for top journals in their fields.

    The final choice of reviewers for a manuscript is never automated. Once we have built a large network of reviewers, our algorithm will help us identify reviewers within our existing pool to consider for the review, but the Rubriq team will still make the final decision about the expertise of the suggested reviewers and how well they match the manuscript. If we have reviewers within our network who are qualified to review a particular paper, we will make the paper available to them to claim the review. If we do not have a good match in our network, we will personally invite new reviewers with the right qualifications and expertise to review the manuscript.
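    As a minimal sketch of the keyword-matching step described above (not Rubriq's actual algorithm; the MeSH terms, scoring function, and reviewer fields are all illustrative), candidate reviewers might be ranked by set overlap between the manuscript's terms and each reviewer's terms, with a human making the final invitation:

```python
# Illustrative sketch of keyword-based reviewer matching using Jaccard
# overlap between MeSH term sets. All names and data are hypothetical.

def jaccard(a, b):
    """Overlap between two term sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_reviewers(paper_terms, reviewers):
    """Rank available reviewers by term overlap with the manuscript.
    A human still reviews this ranking and decides whom to invite."""
    pool = [r for r in reviewers if r["available"]]
    return sorted(pool, key=lambda r: jaccard(paper_terms, r["terms"]),
                  reverse=True)

paper = {"neoplasms", "immunotherapy", "t-lymphocytes"}
reviewers = [
    {"name": "A", "terms": {"neoplasms", "immunotherapy"}, "available": True},
    {"name": "B", "terms": {"microbiota", "neoplasms"}, "available": True},
    {"name": "C", "terms": {"immunotherapy", "t-lymphocytes"}, "available": False},
]
print([r["name"] for r in rank_reviewers(paper, reviewers)])  # ['A', 'B']
```

    In practice the reviewer's availability and requested workload would also weight the ranking, as the answer above notes.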

  3. Do I have a choice over which papers I accept to review?

    Assignments will be made based on reviewer fit and availability. Well-matched reviewers will be given the opportunity to claim assignments that interest them and fit into their schedule. If a reviewer is unsure or has a question, he or she can contact one of our operations managers before accepting an assignment.

  4. What is the expected time commitment for Rubriq reviewers?

    Reviewers will be able to control their availability through their preferences page, where they can indicate the number and frequency of papers they prefer or set themselves to “unavailable” for longer periods of time. When a reviewer is matched to a paper, they have full access to the contents before accepting the review. Once the reviewer has accepted the paper, we expect them to honor the agreed-upon deadline.

  5. How long will it take to complete a review using this system and scorecard?

    The aim of the Rubriq scorecard is to help reviewers apply their knowledge to a review in a time-efficient and uniform manner. We provide training and guidance that reviewers then apply, using their years of experience to produce a simple yet highly accurate review. As all reviewers use the same format for every review they complete, there are no journal-specific review guidelines or report formats to learn. Always using the same scorecard means reviewers become highly familiar with the format and nuances of the scorecard very quickly and are able to effectively and professionally evaluate papers within a realistic time period.

    In our current tests of the scorecard, reviewers estimate that they have been taking from one to three hours on each review. According to a 2008 peer review report by PRC, the median hours per review was five, and the mean was nine. As we collect additional data during our beta phase, we will have a larger sample of data to provide a more accurate average time overall.

  6. What if I don’t want to accept payment, or can’t legally accept payment due to employment, visa, or other issues?

    If you are legally unable to accept payment, you will set up your Rubriq reviewer account with that specification, and will not receive any direct compensation. As the number of our "pro-bono" reviewers increases, Rubriq will be able to increase our contribution to Giving Back programs.

    For those who can accept payment, we are also exploring multiple other compensation options that include a bonus/reward system for quantity and quality of reviews, and the ability to contribute your pay into a fund for your lab, society or organization.