
best practices for metrics gathering #11

Open
alee opened this issue Nov 18, 2021 · 8 comments

alee commented Nov 18, 2021

Identify and develop strategies for gathering and using metrics for repository / registry managers.

These include but are not limited to software citation metrics (who's using the software, at what career stage, in what industry, where, etc.) while remaining respectful of privacy concerns, GDPR, and the like, as well as best practices for adopting analytics packages (Matomo vs. Google Analytics, for example) and UI/UX best practices for gathering analytics in the least annoying way possible (a small analytics sketch follows the questions below).

  1. What are some general-purpose things we should be looking to collect?
  2. What can repositories/registries do to help support software citation metrics more broadly (use DOIs, dissemination via COinS, Zotero support, ...)?
  3. Anything else?

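On the Matomo-vs-GA question, here is a rough sketch (not a recommendation of a specific setup) of what privacy-conscious, server-side collection could look like using Matomo's HTTP Tracking API; the host, site id, and registry URLs below are placeholders.

```python
# Hypothetical sketch: record a model landing-page view and a release download
# against a self-hosted Matomo instance via its HTTP Tracking API (matomo.php).
# Tracking server-side avoids client-side cookies and JavaScript entirely;
# IP anonymization and Do Not Track handling are configured in Matomo itself.
import requests

MATOMO_ENDPOINT = "https://analytics.example.org/matomo.php"  # placeholder instance
SITE_ID = 1  # placeholder Matomo site id for the registry


def track(page_url, action_name, download_url=None):
    """Send a single tracking hit to Matomo."""
    params = {
        "idsite": SITE_ID,
        "rec": 1,               # required: actually record this request
        "apiv": 1,              # tracking API version
        "url": page_url,        # page the action happened on
        "action_name": action_name,
        "send_image": 0,        # return an empty 204 instead of a tracking pixel
    }
    if download_url:
        params["download"] = download_url  # mark the hit as a file download
    requests.get(MATOMO_ENDPOINT, params=params, timeout=5)


# Example usage with placeholder registry URLs:
track("https://registry.example.org/models/123", "Model landing page")
track("https://registry.example.org/models/123", "Release download",
      download_url="https://registry.example.org/models/123/releases/1.0.0.zip")
```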

alee commented Nov 18, 2021

I'm happy to help organize, but if someone would like to step up I'd be glad to defer to them, since I have a conflicting Open Modeling Foundation meeting that week.

Still, I should be able to sneak away for a morning or afternoon. I'll update this availability slot ASAP once the agenda for that meeting is finalized.

I only have availability between 1000 and 1600 MST (UTC-7), unfortunately.


mbjones commented Nov 19, 2021

My unsolicited 2 cents: this is closely related to the effort to standardize the collection and reporting of data citation and usage metrics that has been shepherded via Make Data Count and Scholix. Metrics are challenging, and adoption of comparable methods for collecting them is even more challenging, as evidenced by the issues with uptake of the COUNTER Code of Practice for Research Data. You might be interested to know that several of the folks from Make Data Count (e.g., @chodacki, @mfenner, @dlowenberg) are currently involved in the effort to create v2 of the COUNTER Code for Research Data, and some of the main issues being considered are how to handle repositories that hold a mix of data, publications, and software, and how to handle mixed research objects that contain data, software, text, and other scholarly products. This seems like an opportunity for collaboration between the data and software worlds, and for some degree of consistency between those worlds for metrics.


alee commented Nov 22, 2021

Thanks @mbjones! We've been loosely following Make Data Count and the other initiatives you mentioned (comses/comses.net#374), so harmonizing our efforts is an excellent idea.

Hopefully some of the interested parties will be able to participate in this session!

@mutanthumb

Hi, I'm interested in this topic! I'm in the Software Citation #Hackathon.


alee commented Dec 6, 2021

Great! Apologies for the late reply, but we'll be starting at 1000 MST (1700 UTC).

Zoom link: https://asu.zoom.us/j/88391150994?pwd=T0FZTXZOc0JZZzc3L3VIK1BqRWJ0Zz09


alee commented Dec 6, 2021

Key takeaways / action items

It's important to ensure that, if someone takes the time to properly archive their work, they receive credit when downstream products cite that work. Getting these scholarly metrics right is critical to establishing incentives for people to Do the Right Thing.

However, some of this appears to be out of our hands and more in the purview of the aggregators (DataCite, Crossref, Scopus, PubMed Central, ISI Web of Science, Google Scholar, etc.). A core part of the items below is to identify concrete things scientific software registries and repositories can do to make the aggregators' jobs easier and to help create the citation / knowledge graph for scientific publications of the future.

  1. Focus on scholarly metrics related to citations, references, and reuse.
  2. Come up with a short checklist for registry / repository maintainers on how to integrate with Make Data Count / Project COUNTER.
  3. Create a support group / collect user experiences from early adopters trying to make this work with DataCite, Crossref, etc.
  4. Identify how we as repository / registry managers can share metrics like views and downloads with DataCite, e.g., https://support.datacite.org/docs/displaying-usage-and-citations-in-your-repository (see the first sketch after this list).
  5. Identify the minimal set of scholarly metadata necessary for aggregators: ORCID iDs for all contributors (along with roles? See the proposal for author roles in CodeMeta v3: codemeta/codemeta#240), ROR, a DOI or other PID for the referenced resource, resource type, license, open-access vs. closed-source open-metadata, others? (See the second sketch after this list.)
  6. Establish workflows and guidance for authors, curators, and journals to ensure that publications cite their data and software properly, plus mechanisms to augment published articles with new citations to data or software: https://support.datacite.org/docs/contributing-data-citations#

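For item 4, a hedged sketch of what displaying DataCite-aggregated metrics could look like from the repository side; the attribute names reflect my reading of the DataCite documentation linked above and the DOI is a placeholder, so this should be checked against the current API.

```python
# Hypothetical sketch: ask DataCite's REST API for the aggregated citation,
# view, and download counts of one of our DOIs so they can be shown on the
# resource landing page. Counts are only populated once usage reports and
# citation events have been contributed to and aggregated by DataCite.
import requests


def fetch_datacite_metrics(doi):
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=10)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    return {
        "citations": attrs.get("citationCount", 0),
        "views": attrs.get("viewCount", 0),
        "downloads": attrs.get("downloadCount", 0),
    }


# Placeholder DOI for illustration only:
print(fetch_datacite_metrics("10.5281/zenodo.1234567"))
```
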
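For item 5, an illustrative (not prescriptive) codemeta.json covering the minimal fields mentioned there; every name and identifier below is invented or a standard example value.

```python
# Hypothetical sketch: emit a minimal codemeta.json carrying contributor ORCIDs,
# an affiliation ROR, a resolvable PID, resource type, and license.
import json

codemeta = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "identifier": "https://doi.org/10.5281/zenodo.1234567",  # placeholder DOI
    "name": "Example Agent-Based Model",
    "license": "https://spdx.org/licenses/MIT",
    "author": [
        {
            "@type": "Person",
            "givenName": "Ada",
            "familyName": "Example",
            "@id": "https://orcid.org/0000-0002-1825-0097",  # ORCID's well-known example record
            "affiliation": {
                "@type": "Organization",
                "name": "Example University",
                "@id": "https://ror.org/00x0x0x00",  # placeholder ROR id
            },
        }
    ],
}

with open("codemeta.json", "w") as f:
    json.dump(codemeta, f, indent=2)
```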

@mutanthumb

Hi again Allen, I'm not seeing a way to officially join SciCodes. Any advice would be great! -susan
