eda
Here are 857 public repositories matching this topic...
Hi there,
I think there might be a mistake in the documentation. The "Understanding Scaled F-Score" section says:
The F-Score of these two values is defined as:
$$ \mathcal{F}_\beta(\mbox{prec}, \mbox{freq}) = (1 + \beta^2) \frac{\mbox{prec} \cdot \mbox{freq}}{\beta^2 \cdot \mbox{prec} + \mbox{freq}}. $$
$\beta \in \mathbb{R}^+$ is a scaling factor where frequency is favored if $\beta > 1$.
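For reference, the formula above can be sketched in Python (the function name is mine, not from the scattertext docs):

```python
def f_beta(prec, freq, beta=1.0):
    """Weighted harmonic mean of precision and frequency (the F_beta score).

    beta > 1 weights freq more heavily; beta < 1 weights prec more.
    """
    return (1 + beta**2) * (prec * freq) / (beta**2 * prec + freq)
```

With `beta = 1` this reduces to the plain harmonic mean, e.g. `f_beta(0.5, 0.5)` is `0.5`.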
SUMMARY
When you create a BOM, you can add additional attribute columns (thanks for that, it is really useful). But they are not saved; you have to enter them again each time.
SOLUTION
Quick fix: remember the added attributes for the duration of a session.
For the future, store them in the project settings when the project is saved.
FuseSoC supports use flags, but doesn't document which use flags are actually set. This needs to be documented.
Currently we set:
- A target use flag `target_TARGETNAME`, e.g. `target_sim` if fusesoc is called with `--target=sim`.
- A tool use flag `tool_TOOLNAME`, e.g. `tool_verilator` if fusesoc is called with `--tool=verilator`.
Support for user-defined use flags is being developed in #26
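The naming rule described above can be sketched as follows (a sketch of the documented behaviour, not FuseSoC's actual implementation):

```python
def implicit_use_flags(target=None, tool=None):
    """Sketch of the use flags FuseSoC sets implicitly from its CLI options."""
    flags = set()
    if target is not None:
        flags.add("target_" + target)   # e.g. --target=sim    -> target_sim
    if tool is not None:
        flags.add("tool_" + tool)       # e.g. --tool=verilator -> tool_verilator
    return flags
```

For example, `implicit_use_flags(target="sim", tool="verilator")` yields `{"target_sim", "tool_verilator"}`.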
Expected Behaviour
Odin should compare the titles by name and make sure they match. The comparison should also be case sensitive (which I think Odin is).
Current Behaviour
Odin uses strcmp, so it compares the raw strings including blank spaces, not the tokens.
Possible Solution
Compare the tokens instead of using strcmp.
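A token-wise comparison (sketched here in Python rather than Odin's C source) would ignore differences in blank spaces while staying case sensitive:

```python
def titles_match(a, b):
    """Compare two titles token by token (case sensitive), so that
    differences in whitespace alone do not count as a mismatch."""
    return a.split() == b.split()

# A strcmp-style comparison would reject the first pair; token comparison accepts it:
# titles_match("module  top", "module top")  -> True
# titles_match("module top", "Module top")   -> False (case sensitive)
```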
Steps to
Adding a description for the parameters will help users understand how to specify values for each parameter, for example the format of the longitude in the Yelp.businesses table, or the maximum number of results a user can expect (if we incorporate a limit parameter in the future).
It would be great if there was an option to preserve the original order of variables in plot_histogram(). Currently, variables within each page of the output seem to be ordered alphabetically but the pages themselves follow the original order.
It would be great to make `-help`/`--help` aliases for the command-line argument `-h`. It looks like the changes should be made in layApplication.cc and gtfui.cc.
It may be reasonable to introduce non-abbreviated versions of other command-line arguments too.
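As an illustration of the requested behaviour (plain Python argparse, not KLayout's C++ option parsing): `-h`/`--help` come built in, and a single-dash `-help` alias can be registered explicitly with the `help` action:

```python
import argparse

parser = argparse.ArgumentParser(add_help=True)   # -h and --help are built in
parser.add_argument("-help", action="help",       # extra single-dash alias
                    help=argparse.SUPPRESS)       # keep it out of the usage text
```

Passing `-help` then prints the help text and exits with status 0, just like `-h`.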
To make differences between datasets easier to spot visually (especially when there are many columns), it would be helpful if one could sort the categorical columns by the Jensen–Shannon divergence.
The code below tries to do so but it seems to distort the labels on the y-axis. Also, in case the jsd column contains missing values, those variables are deleted from the graph.
library(in
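The sorting idea itself can be sketched in Python (a sketch with numpy only, not the package's code; column distributions are assumed to be given as probability vectors):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0                      # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def sort_columns_by_jsd(dist_a, dist_b):
    """Column names sorted by divergence between the two datasets, largest first."""
    return sorted(dist_a, key=lambda c: -js_divergence(dist_a[c], dist_b[c]))
```

Handling of missing divergence values (so the affected variables stay on the graph instead of being dropped) would still need to be added on top of this.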
Here is a simple example for Vivado.

```python
def vivado_resources(self):
    report_path = self.out_dir + "/" + self.project_name + ".runs/impl_1/top_utilization_placed.rpt"
    with open(report_path, 'r') as fp:
        report_data = fp.read()
    repo
```
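The report text could then be searched for resource counts. The exact table layout varies between Vivado versions, so the pattern below is an assumption, and the sample rows in the comment stand in for the contents of the `.rpt` file read above:

```python
import re

def parse_utilization(report_data):
    """Extract 'Name | Used' pairs from utilization-table rows such as
    '| Slice LUTs | 1234 | ... |' (sketch; the column layout is an
    assumption and may differ between Vivado versions)."""
    usage = {}
    for line in report_data.splitlines():
        m = re.match(r"\|\s*([\w .]+?)\s*\|\s*(\d+)\s*\|", line)
        if m:
            usage[m.group(1)] = int(m.group(2))
    return usage
```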
Examples ideas
Examples are short, self-contained articles about specific topics or possibilities.
- Understanding NeuroKit: how to see what a function does in the docs, then its code on GitHub, then where the code is located on your machine, and where you can try to make changes/fixes
- How to contribute: once some changes/fixes are made, how to push back to
As a user,
it would be nice to have the "Observed Value" field standardized to show the percentage of successful validations, instead of a mix of 0% / 100%. Different levels of validation output use different verbiage, which confuses anyone not used to the expectations. I've given an example below in a screenshot of what I mean:
![image](https://user-images.g
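The normalization being asked for amounts to the following (a hypothetical helper, not part of the Great Expectations API; the parameter names are mine):

```python
def observed_value_pct(element_count, unexpected_count):
    """Report a validation's observed value uniformly as the percentage of
    elements that passed, rather than mixing 0% / 100% styles."""
    if element_count == 0:
        return "100% successful"   # vacuously successful on empty input
    pct = 100.0 * (element_count - unexpected_count) / element_count
    return f"{pct:g}% successful"
```

For example, 10 unexpected values out of 200 elements would read "95% successful" regardless of which expectation produced the result.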