The idea for Rubriq grew out of questions we asked of the researcher, editor, and reviewer communities. We know these communities have several questions about us, and about what we're creating. We've tried to address as many of these as we can in this FAQ section.
We don't have all the answers yet, and as we continue the conversation, these questions and answers will evolve. Our FAQs list questions and answers that we've resolved, but we will also post open issues on our blog, as we are looking to people like you to help find the best answers. If we have missed a key topic in the FAQs below, please email us at firstname.lastname@example.org.
No. At Rubriq we believe that double-blind, pre-publication peer review is the best way we currently have to evaluate research with minimal bias and a well-understood measure of value. We looked at many models of peer review, as well as data and feedback from the community about peer review systems. Our goal was to retain all the great things about the current process, but eliminate some of the redundancy between journals.
Traditional journals can use the Rubriq system as a screening step before their peer review process to help filter papers and accelerate their time to acceptance. The assessment of whether or not a journal is interested in a paper is still separate from an independent assessment of quality. Other types of publishers (e.g., Green or Gold open access) could choose to use the Rubriq system as their sole method of peer review. But making peer review independent does not require that the traditional process go away; it simply makes the process more efficient.
Just because reviewers don't often get paid for their time doesn't mean that peer review is free. It's just that it's hard to see the costs from the outside. There is an indirect cost for the value of a reviewer's time, since they are taking time away from their other work. The current traditional peer review process relies on editorial and other journal staff to keep it running, and they all need to get paid for their time and effort. The process is also usually supported by software systems that can cost publishers considerable resources to manage. Larger publishers have to pass those costs on to journals, and both journals and publishers have to pass costs on to libraries, institutions, subscribers and authors. Since the costs incurred for peer review are directly related to publication of individual articles, they are commonly worked into the publication fees paid by the author. Those are often paid by authors out of grants or other funding. So all players in the current system end up paying for a piece of peer review, one way or another.
At Rubriq we need to pay for similar types of people and software to run our own process, but we feel we can do it more efficiently with the new workflow system we've created. The wasted time and money we want to address isn't happening within any single editorial office - it's the redundancy that's happening across the entire system as papers are re-reviewed. So although we will have some operating costs, we see our main direct costs being payments to reviewers.
So why pay reviewers? First, philosophically we feel that reviewers should be compensated for the valuable service they provide for the scientific community. Second, providing payment makes the process more formal, and can lead to more standards, training, and recognition. Finally, in order to be able to deliver high-quality, consistent reviews in a two-week timeline, it is important to provide compensation for that commitment. And by offering reviewers the option of an honorarium in lieu of payment, some of those earnings can even go directly back into research organizations.
We are currently accepting submissions in all fields. The turnaround time will be two weeks as we continue to build our reviewer pool. Authors will receive the Rubriq scorecard with feedback from three experts, a journal recommendation report based on the score and content of the paper, and a plagiarism report from iThenticate. The price for this service is $650. If a submission is in an area of study for which we are unable to provide a journal recommendation report, the cost will be reduced.
We’re on everyone’s side. That’s the whole idea of independent peer review. By taking some of the filtering burden off the journals and leading them to new authors, we help them become more efficient—without requiring them to make huge changes and without any additional cost. At the same time, this enables authors to find the right journal faster, eliminating unnecessary "journal submission loops." We hope that authors get better insights into their papers so that they can continue to improve. We want to help researchers find the right match sooner so they can spend more time at the bench instead of getting caught up in endless loops of submission, rejection and resubmission.
Both open access and traditional publishers can be a part of the Rubriq network. In fact, any type of publishing system can use the Rubriq system, scorecard and R-Score. While some startup open access journals or alternative publishers may choose to replace or outsource their peer review process with the Rubriq system, traditional publishers can use us in addition to their own processes in order to maximize operational efficiency. Although we do support the open access movement, the Rubriq system is essentially "publisher agnostic", and can be used to benefit any type of publishing.
Our goal is to create an efficient and centralized system that benefits everyone, but also that minimizes any potential bias or conflict of interest. By existing outside any particular group, maintaining a balanced advisory board, and relying only on self-funding, we hope to be able to do just that.
Rubriq Reviewers will be the same academic reviewers that journals use for their peer reviews. They will be required to have strong publication records and significant peer-review experience for well-respected journals in their fields. Our reviewers will need to have completed a doctoral-level degree (MD, MD-PhD, PhD, DDS, DMD, or DVM) at an accredited university. In addition, they must be active postdoctoral or faculty-level researchers at top research universities or institutes from around the world. Many Rubriq reviewers may also act as academic editors or editors-in-chief for well-known journals. Rubriq seeks to attract peer-reviewers who enjoy reviewing and who take the time to provide well-written, detailed, constructive feedback to fellow researchers in their fields. If you are interested in applying to become a reviewer, please sign up and click on “Apply to be a reviewer”.
The Rubriq system offers reviewers a real-time record of their completed reviews that they can use to support their academic goals. The easy online standard scorecards will make the review process go faster, since the reviewer doesn't have to adjust to a new set of questions and expectations each time.
Reviewers also have more control and flexibility over when they receive requests for reviews; they can vary their availability to review according to their own schedule. Finally, we know that all of our reviewers are extremely busy, so we offer each reviewer compensation to say thank you for their time and effort. It’s our way of acknowledging their critical role in the peer review process. We are also exploring other options for compensation (donations, credits for the reviewer's lab or society) for those who prefer not to receive direct payment.
The Rubriq scorecard was primarily designed to evaluate manuscripts reporting original scientific research in the biological and medical sciences, but it can also be applicable to papers in psychological and environmental/ecological sciences, among others. Minor variations in the standard publication requirements of a particular field can be taken into account by the expert reviewers when they fill out the scorecard. We have recently completed work on customized scorecards for humanities/social sciences, physical sciences, engineering, math/computer science, and economics, as well as a statistical analysis add-on that can be included with any scorecard. Contact us at email@example.com if you want to confirm coverage in your area of study. The scorecard was designed primarily for original basic research, original clinical research, clinical trials, and meta-analyses/systematic reports. Though we do have a scorecard for clinical case studies, we do not currently have scorecards designed specifically for literature reviews, letters to the editor, communications, position papers, or short reports.
We believe that the peer review process would benefit from structure and that a rigorous review can be achieved using a standardized instrument. The Rubriq scorecard is the cornerstone of a transparent peer review process that allows manuscripts to be uniformly evaluated outside the lens of one specific journal. By translating the essential elements of a well-constructed, well-communicated research story into quantifiable metrics, the Rubriq scorecard provides clear feedback to authors while generating thorough reviews that can be transferred from journal to journal. The ability to map scores from various sections or subsections of the scorecard to individual journals facilitates data-driven manuscript-journal matching, which helps authors quickly identify the highest impact, best fitting journal for their paper.
The scorecard is broken into three sections: Quality of Research, Quality of Presentation, and Novelty and Interest; these are each broken into subsections. Click to view a sample report. At the heart of the scorecard is a set of rubrics for each subsection designed to help calibrate reviewers around the key items essential to a paper following standard scientific methodology. The rubric itself consists of a series of items that may be missing or inadequate in the manuscript. As reviewers select items from these lists, suggested scores are generated based on the importance of the item. Reviewers may adjust the subsection scores according to their judgment.
The R-Score is the overall score for a paper, combining all three sections. Overall section scores are calculated from the reviewer input, weighted according to the importance of each subsection, and the scores from all three reviewers are averaged. The upper limit of the overall score is determined by the Novelty and Interest score. A well-executed study with limited interest may have high quality scores, but a low overall score, because the novelty and interest value is low.
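As a rough illustration, the aggregation described above can be sketched in code. Note that the 0–5 scale, the equal section weights, and the hard cap at the novelty average are all assumptions for illustration; Rubriq's actual weights and rules are its own.

```python
# Illustrative sketch only: the real Rubriq scale, weights, and
# capping rule are not public. A 0-5 scale, equal section weights,
# and a hard cap at the averaged novelty score are assumed here.

def r_score(reviews):
    """Average the reviewers' section scores, then cap the overall
    score at the averaged Novelty and Interest score."""
    n = len(reviews)
    quality = sum(r['quality'] for r in reviews) / n
    presentation = sum(r['presentation'] for r in reviews) / n
    novelty = sum(r['novelty'] for r in reviews) / n
    overall = (quality + presentation + novelty) / 3  # equal weights assumed
    return min(overall, novelty)  # novelty sets the upper limit

# A well-executed but low-novelty study: quality and presentation
# scores are high, but the overall score is pulled down to the
# novelty average, as described above.
reviews = [
    {'quality': 4.5, 'presentation': 4.0, 'novelty': 2.0},
    {'quality': 4.0, 'presentation': 4.5, 'novelty': 2.5},
    {'quality': 4.5, 'presentation': 4.0, 'novelty': 2.0},
]
print(round(r_score(reviews), 2))  # → 2.17
```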
Our scorecard has been through several rounds of construction, review, testing, and re-construction. Some highlights of its development are detailed in our white paper, available for download below, which shows the process of our scorecard development and validation to date. We are continuing cycles of testing and feedback, especially as we create new versions of the scorecard for other areas of study. Click to download our white paper, "The Science of the Scorecard".
We chose the name "Rubriq" because the word embodies a well-established set of parameters and the process of review. We felt that the core element of our entire system was the rubric we developed. This rubric, which is now patent-pending, is our new standardized protocol for evaluating the quality of research articles. It is what enables the system to be standardized and transferable across journals. We spell our rubric with a Q because our system is not just about the methods, it’s also about the resulting score and how it is used to effectively match a paper with the right journal – a rubric in itself. So for us, Q represents the quotient in the final score, but also more generally stands for Quality.
The Rubriq scorecard uses a common notation system to visually interpret a range of scores. This icon system is referred to as "Harvey Balls", and features a range of five icons with different amounts of the ball colored in. The top score on our scale is indicated by a solid blue circle with a small white dot in the center.
In our logo, this symbol of highest quality is also turned into a speech bubble. This represents the concept that our evaluations don't just exist alone, but are a key part of an entire communication system. The R-Score and Rubriq scorecard are designed to connect research with the best-match journal. With the Rubriq system, the score goes beyond just the author and can communicate its quality to the entire research community. We made it blue because, well, we all liked the color blue.
Rubriq’s initial startup funding comes from Research Square, its parent company. Research Square was created with the purpose of incubating and launching startups without the burden of ties to any external funding, investors or shareholders. All Research Square divisions are privately held, and are connected by a common purpose to serve the research community as a for-benefit organization (living in the space between for-profit and non-profit). Organizations under this parent have the freedom to operate independently to create a positive impact on the research community, but also need to achieve sustainability on their own. After the initial beta/startup phases, Rubriq will be funded directly by its submission fees.
Both Rubriq and American Journal Experts (AJE) are separate divisions under a common parent, Research Square. Research Square is a privately held company founded by entrepreneur Shashi Mudunuri, who also created American Journal Experts in 2004 and co-founded Rubriq with Keith Collier. Among other services, AJE has provided pre-submission peer review (NewReview) and Journal Recommendation services to help authors on the path to publication. The majority of Rubriq’s startup team was recruited from the same AJE departments that developed these services, so that they could apply their unique knowledge and experience to independent peer review. To learn more about us, visit the Our Team page.
Authors may choose to include the Rubriq Report with their submission in PDF format, or provide a verification code for you to enter at scorecards.rubriq.com to view the report online. The report contains the Rubriq Scorecard with comments, reviewer profiles, and an iThenticate plagiarism report. See a sample of our online scorecard here.
We operate on an author-pay model. One major benefit to an author submitting to the Rubriq system is our journal recommendation engine, which is made more robust by journal participation. As more journals join the Rubriq network and indicate their preferences, our ability to add value for authors increases. By not charging a fee for journals to join, we eliminate any barriers to growing the network and giving the most value. Providing a useful and free tool for editors is a positive side effect of this model.
We believe that the Rubriq system is useful for all journals, regardless of their publication model. Traditional journals may find value in the Rubriq Report to supplement their own review process or attract appropriate papers for submission. More selective journals might use R-Scores as a filter to catch “diamonds in the rough” during the triage process. Mega OA journals may find the R-Scores useful in setting an initial reception rating to help readers filter a large body of content upon initial publication. The R-Score or complete Rubriq Report could be the starting point for post-publication peer review, comments, and rating systems. Irrespective of the journal type, the Rubriq system is a flexible (and free) tool that helps journals select the best content for their readers.
The decision to publish or not lies in the hands of the journal. At Rubriq, we are trying to remove the redundancy in the academic publishing process by creating a standardized instrument that is useful to all journals. How a journal chooses to use Rubriq reviews is entirely up to that journal. We have talked with newer open-access journals that plan to use the Rubriq system for their entire peer review process; however, the majority of editors see our reports as tools to filter papers, find interesting manuscripts before submission, or supplement their current review process so that more resources can be directed toward their many other tasks.
As part of the Rubriq scorecard with all reviewer comments and ratings, journals will be able to see the name of each reviewer and the institution where he or she is currently employed. We are currently exploring options for providing additional information on each reviewer. If there are specific things you would like to see, we welcome your feedback!
Rubriq reviewers must have a terminal degree in their field (MD, MD-PhD, PhD, DDS, DMD, or DVM), hold a current academic appointment (active postdoctoral or faculty-level researchers), and have previous peer review experience. Although we are currently only accepting papers in Immunology, Cancer Research, and Microbiology, we are recruiting reviewers from all STM fields in preparation for our next phases of launch. To be considered, simply sign up on our site, click on “Apply to be a reviewer” from your dashboard, and complete the application. You will hear back from our team about the next steps. Once accepted, you will see a new Reviewer Dashboard available on your account when you log in.
We have developed a software tool that makes highly accurate matches between manuscripts and reviewers based on a keyword matching algorithm. Papers are classified as either clinical or basic research, then matched by MeSH terms pulled from the abstract and title of the submission compared to MeSH terms from representative publications of the reviewers. The more information and keywords you provide on your reviewer profile, the better your matches will be. We also take reviewer availability and requested workload into account when distributing papers. A member of the Rubriq team reviews the suggested matches from the database, checks the information on each potential reviewer to confirm that the reviewers' expertise matches the topic, then makes a decision on who to invite for the review.
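To make the matching idea concrete, here is a toy sketch of keyword-overlap ranking. The use of Jaccard similarity, the sample MeSH terms, and the reviewer records are all hypothetical; Rubriq's actual algorithm and data are not public, and as described above a team member still vets any suggested match.

```python
# Toy sketch of keyword-based reviewer matching. Rubriq's actual
# algorithm, MeSH extraction, and scoring are not public; this only
# illustrates ranking reviewers by term overlap with a paper.

def jaccard(a, b):
    """Overlap between two sets of MeSH-style terms."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_reviewers(paper_terms, reviewers):
    """Return available reviewers sorted by term overlap with the
    paper. `reviewers` maps a name to {'terms': [...], 'available': bool}."""
    candidates = [
        (name, jaccard(paper_terms, info['terms']))
        for name, info in reviewers.items()
        if info['available']
    ]
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# Hypothetical paper terms and reviewer pool for illustration.
paper = ['Neoplasms', 'Immunotherapy', 'T-Lymphocytes']
pool = {
    'Reviewer A': {'terms': ['Neoplasms', 'Immunotherapy', 'Mice'], 'available': True},
    'Reviewer B': {'terms': ['Microbiota', 'Metagenomics'], 'available': True},
}
ranked = rank_reviewers(paper, pool)
# Reviewer A ranks first on overlap; a human would still confirm the
# expertise match before issuing an invitation.
```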
Since we are still building our database and are trying to match expertise as specifically as possible, we expect to also recruit outside our reviewer database. When we receive a manuscript and do not have an active reviewer with the right experience for a particular manuscript, we invite reviewers who have published manuscripts in similar areas of research and who are experts in those areas. We make sure that the reviewers who review the papers are well matched to the papers based on the specific topic, model system, and methods used in the paper. It is very important to us that we find the right reviewers for the paper. Many of our current reviewers are faculty members who review for top journals in their fields.
There is no automation in our method of choosing reviewers for manuscripts. Once we have built a large network of reviewers, our algorithm will help us identify reviewers within our existing pool to consider for the review. The Rubriq team will still make a final decision regarding the expertise of the reviewers that the algorithm suggests and how well the reviewers match to the manuscript. If we have reviewers within our network who are qualified to review that particular paper, we will make the paper available to them to claim the review. If we do not have a good match in our network, we will personally invite new reviewers with the right qualifications and expertise to review the manuscript.
Assignments will be made based on reviewer fit and availability. Well-matched reviewers will be given the opportunity to claim assignments that are of interest to them and that fit into their schedule. A reviewer who is not confident about a match or has a question can contact one of our operations managers before accepting an assignment.
Reviewers will be able to control their availability through their preferences page, where they can indicate the number and frequency of papers they prefer or set themselves to “unavailable” for longer periods of time. When a reviewer is matched to a paper, they have full access to the contents before accepting the review. Once the reviewer has accepted the paper, we expect them to honor the agreed-upon deadline.
The aim of the Rubriq scorecard is to help reviewers apply their knowledge to a review in a time-efficient and uniform manner. We provide training and guidance that reviewers then apply, using their years of experience to produce a simple yet highly accurate review. As all reviewers use the same format for every review they complete, there are no journal-specific review guidelines or report formats to learn. Always using the same scorecard means reviewers become highly familiar with the format and nuances of the scorecard very quickly and are able to effectively and professionally evaluate papers within a realistic time period.
In our current tests of the scorecard, reviewers estimate that they have been taking from one to three hours on each review. According to a 2008 peer review report by the PRC, the median time per review was five hours, and the mean was nine. As we collect additional data during our beta phase, we will be able to report a more accurate overall average.
If you are legally unable to accept payment, you can indicate this when setting up your Rubriq reviewer account, and you will not receive any direct compensation. As the number of our "pro-bono" reviewers increases, Rubriq will be able to increase our contribution to Giving Back programs.
For those who can accept payment, we are also exploring multiple other compensation options that include a bonus/reward system for quantity and quality of reviews, and the ability to contribute your pay into a fund for your lab, society or organization.